
1 June 2021

Robert McQueen: Next steps for the GNOME Foundation

As the President of the GNOME Foundation Board of Directors, I'm really pleased to see the number and breadth of candidates we have for this year's election. Thank you to everyone who has submitted their candidacy and volunteered their time to support the Foundation. Allan has recently blogged about how the board has been evolving, and I wanted to follow that post by talking about where the GNOME Foundation is in terms of its strategy. This may be helpful as people consider which candidates might bring the best skills to shape the Foundation's next steps.

Around three years ago, the Foundation received a number of generous donations, and Rosanna (Director of Operations) gave a presentation at GUADEC about her and Neil's (Executive Director, essentially the CEO of the Foundation) plans to use these funds to transform the Foundation. We would grow our activities, increasing the pace of events, outreach, development and infrastructure that supported the GNOME project and the wider desktop ecosystem and, crucially, would grow our funding to match this increased level of activity.

I think it's fair to say that half of this has been a great success: we've got a larger staff team than GNOME has ever had before. We've widened the GNOME software ecosystem to include related apps and projects under the GNOME Circle banner, we've helped get GTK 4 out of the door, run a wider-reaching program in the Community Engagement Challenge, and consistently supported better infrastructure for both GNOME and the Linux app community in Flathub. Aside from another grant from Endless (note: my employer), our fundraising hasn't caught up with this pace of activities. As a result, the Board recently approved a budget for this financial year which will spend more funds from our reserves than we expect to raise in income. Due to our reserves policy, this is essentially the last time we can do this: over the next 6-12 months we need to either raise more money, or start spending less.

For clarity, the Foundation is fit and well from a financial perspective: we have a very healthy bank balance, and a very conservative 12-month run rate reserve policy to handle fluctuations in income. If we do have to slow down some of our activities, we will return to a steady state where our regular individual donations and corporate contributions can support a smaller staff team that supports the events and infrastructure we've come to rely on. However, this isn't what the Board wants to do: the previous and current boards were unanimous in their support of the idea that we should be ambitious, try to do more in the world and bring the benefits of GNOME to more people. We want to take our message of trusted, affordable and accessible computing to the wider world.

Typically, a lot of the activities of the Foundation have been very inwards-facing, supporting and engaging with either the existing GNOME or Open Source communities. This is a very restricted audience in terms of fundraising: many corporate actors in our community already support GNOME hugely in terms of both financial and in-kind contributions, and many OSS users are already supporters, either through volunteer contributions or by donating to those nonprofits that they feel are most relevant and important to them. To raise funds from new sources, the Foundation needs to take the message and ideals of GNOME and Open Source software to new, wider audiences that we can help.
We've been developing themes such as affordability, privacy/trust and education as promising areas for new programs that broaden our impact. The goal is to find projects and funding that allow us to both invest in the GNOME community and find new ways for FOSS to benefit people who aren't already in our community. Bringing it back to the election, I'd like to make clear that I see this, reaching the outside world and finding funding to support that, as the main priority and responsibility of the Board for the next term.

GNOME Foundation elections are a slightly unusual process that filters our board nominees by requiring them to be existing Foundation members, which means that candidates already work inside our community when they stand for election. If you're a candidate and are already active in the community: THANK YOU, you're doing great work, keep doing it! That said, you don't need to be a Director to achieve things within our community or gain the support of the Foundation: being a community leader is already a fantastic and important role. The Foundation really needs support from the Board to make a success of the next 12-18 months. We need to understand our financial situation and the trade-offs we have to make, and help to define the strategy with the Executive Director so that we can launch some new programs that will broaden our impact and funding for the future. As people cast their votes, I'd like them to think about what kind of skills (building partnerships, commercial background, familiarity with finances, experience in nonprofit / impact spaces, etc.) will help the Board make the Foundation as successful as it can be during the next term.

20 May 2021

Jonathan McDowell: Losing control to Kubernetes

Kubernetes is about giving up control. As someone who likes to understand what's going on, that's made it hard for me to embrace it. I've also mostly been able to ignore it, which has helped. However, I'm aware it's incredibly popular, and there's some infrastructure at work that uses it. While it's not my responsibility, I always find having an actual implementation of something is useful in understanding it generally, so I decided it was time to dig in and learn something new.

First up, I should say I understand the trade-off here about handing a bunch of decisions off to Kubernetes about the underlying platform, allowing development/deployment to concentrate on a nice consistent environment. I get the analogy with the shipping container model, where you can abstract out both sides knowing all you have to do is conform to the TEU API. In terms of the underlying concepts I've got some virtualisation and container experience, so I'm not coming at this as a complete newcomer. And I understand multi-site dynamically routed networks.

That said, let's start with a basic goal. I'd like to understand k8s (see, I can be cool and use the short name) enough to be comfortable with what's going on under the hood and be able to examine a running instance safely (i.e. enough confidence about pulling logs, probing state etc. without fearing I might modify state). That'll mean when I come across such infrastructure I have enough tools to be able to hopefully learn from it. To do this I figure I'll need to build myself a cluster and deploy some things on it, then poke it. I'll start by doing so on bare metal; that removes variables around cloud providers and virtualisation and gives me an environment I know is isolated from everything else. I happen to have a GMK NucBox available, so I'll use that.

As a first step I'm aiming to get a single node cluster deployed running some sort of web accessible service that is visible from the rest of my network. That should mean I've covered the basics of a Kubernetes install, a running service and actually making it accessible. Of course I'm running Debian. I've got a Bullseye (Debian 11) install - not yet released as stable, but in freeze and therefore not a moving target. I wanted to use packages from Debian as much as possible, but it seems that the bits of Kubernetes available in main are mostly just building blocks and not a great starting point for someone new to Kubernetes. So to do the initial install I did the following:
# Install docker + nftables from Debian
apt install docker.io nftables
# Add the Kubernetes repo and signing key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg > /etc/apt/k8s.gpg
cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb [signed-by=/etc/apt/k8s.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt update
apt install kubelet kubeadm kubectl
That resulted in a 1.21.1-00 install, which is current at the time of writing. I then used kubeadm to create the cluster:
kubeadm init --apiserver-advertise-address 192.168.53.147 --apiserver-cert-extra-sans udon.mynetwork
The extra parameters were to make the API server externally accessible from the host. I don't know if that was a good idea or not at this stage. kubeadm spat out a bunch of instructions, but the key piece was about copying the credentials to my user account. So I did:
mkdir ~noodles/.kube
cp -i /etc/kubernetes/admin.conf ~noodles/.kube/config
chown -R noodles ~noodles/.kube/
I then was able to see my node:
noodles@udon:~$ kubectl get nodes
NAME   STATUS     ROLES                  AGE     VERSION
udon   NotReady   control-plane,master   4m31s   v1.21.1
Ooooh. But why's it NotReady? Seems like it's a networking issue and I need to install a networking provider. The documentation on this is appalling. Flannel gets recommended as a simple option, but then turns out to need a --pod-network-cidr option passed to kubeadm, and I didn't feel like cleaning up and running again (I've omitted all the false starts it took me to get to this point). Another pointer was to Weave, so I decided to try that with the following magic runes:
mkdir -p /var/lib/weave
head -c 16 /dev/urandom | shasum -a 256 | cut -d " " -f1 > /var/lib/weave/weave-passwd
kubectl create secret -n kube-system generic weave-passwd --from-file=/var/lib/weave/weave-passwd
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&password-secret=weave-passwd&env.IPALLOC_RANGE=192.168.0.0/24"
(I believe what that's doing is this: the first three lines create a password and store it into the internal Kubernetes config so the weave pod can retrieve it. The final line then grabs a YAML config from Weaveworks to configure up weave. My intention is to delve deeper into what's going on here later; for now the primary purpose is to get up and running.) As I'm running a single node cluster I then had to untaint my control node so I could use it as a worker node too:
kubectl taint nodes --all node-role.kubernetes.io/master-
And then:
noodles@udon:~$ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
udon   Ready    control-plane,master   15m   v1.21.1
Result. What's actually running? Nothing except the actual system stuff, so we need to ask for all namespaces:
noodles@udon:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-4nvrg       1/1     Running   0          18m
kube-system   coredns-558bd4d5db-flrfq       1/1     Running   0          18m
kube-system   etcd-udon                      1/1     Running   0          18m
kube-system   kube-apiserver-udon            1/1     Running   0          18m
kube-system   kube-controller-manager-udon   1/1     Running   0          18m
kube-system   kube-proxy-6d8kg               1/1     Running   0          18m
kube-system   kube-scheduler-udon            1/1     Running   0          18m
kube-system   weave-net-mchmg                2/2     Running   1          3m26s
These are all things I'm going to have to learn about, but for now I'll nod and smile and pretend I understand. Now I want to actually deploy something to the cluster. I ended up with a simple HTTP echoserver (though it's not entirely clear that's actually the source for what I ended up pulling):
$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.10
deployment.apps/hello-node created
$ kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
hello-node-59bffcc9fd-8hkgb   1/1     Running   0          36s
$ kubectl expose deployment hello-node --type=NodePort --port=8080
$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node   NodePort    10.107.66.138   <none>        8080:31529/TCP   1m
Looks good. And to test locally:
curl http://10.107.66.138:8080/

Hostname: hello-node-59bffcc9fd-8hkgb
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=192.168.53.147
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://10.107.66.138:8080/
Request Headers:
	accept=*/*
	host=10.107.66.138:8080
	user-agent=curl/7.74.0
Request Body:
	-no body in request-
Neat. But my external network is 192.168.53.0/24 and that's a 10.* address, so how do I actually make it visible to other hosts? What I seem to need is an Ingress Controller, which provides some sort of proxy between the outside world and pods within the cluster. Let's pick nginx, because at least I have some vague familiarity with that, and it seems like it should be able to do a bunch of HTTP redirection to different pods depending on the incoming request.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
I then want to expose the hello-node to the outside world and I finally had to write some YAML:
cat > hello-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: udon.mynetwork
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-node
                port:
                  number: 8080
EOF
i.e. incoming requests to http://udon.mynetwork/ should go to the hello-node on port 8080. I applied this:
$ kubectl apply -f hello-ingress.yaml
ingress.networking.k8s.io/example-ingress created
$ kubectl get ingress
NAME              CLASS    HOSTS            ADDRESS   PORTS   AGE
example-ingress   <none>   udon.mynetwork             80      3m8s
No address? What have I missed? Let's check the nginx service, which apparently lives in the ingress-nginx namespace:
noodles@udon:~$ kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                    AGE
ingress-nginx-controller             LoadBalancer   10.96.9.41      <pending>     80:32740/TCP,443:30894/TCP 13h
ingress-nginx-controller-admission   ClusterIP      10.111.16.129   <none>        443/TCP                    13h
<pending> does not seem like something I want. Digging around it seems I need to configure the external IP. So I do:
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p \
	'{"spec": {"type": "LoadBalancer", "externalIPs": ["192.168.53.147"]}}'
and things look happier:
noodles@udon:~$ kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                 AGE
ingress-nginx-controller             LoadBalancer   10.96.9.41      192.168.53.147   80:32740/TCP,443:30894/TCP   14h
ingress-nginx-controller-admission   ClusterIP      10.111.16.129   <none>           443/TCP                 14h
noodles@udon:~$ kubectl get ingress
NAME              CLASS    HOSTS           ADDRESS          PORTS   AGE
example-ingress   <none>   udon.mynetwork  192.168.53.147   80      14h
Let s try a curl from a remote host:
curl http://udon.mynetwork/

Hostname: hello-node-59bffcc9fd-8hkgb
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=192.168.0.5
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://udon.mynetwork:8080/
Request Headers:
	accept=*/*
	host=udon.mynetwork
	user-agent=curl/7.64.0
	x-forwarded-for=192.168.53.136
	x-forwarded-host=udon.mynetwork
	x-forwarded-port=80
	x-forwarded-proto=http
	x-real-ip=192.168.53.136
	x-request-id=6aaef8feaaa4c7d07c60b2d05c45f75c
	x-scheme=http
Request Body:
	-no body in request-
Ok, so that seems like success. I've got a single node cluster running a single actual application pod (the echoserver) and exporting it to the outside world. That's enough to start poking under the hood. Which is for another post, as this one is already getting longer than I'd like. I'll just leave some final thoughts of things I need to work out:
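Top of that list is a safe, read-only toolkit for the poking itself. As a first sketch, these are the kinds of commands I have in mind; as far as I know they only query state rather than modify it, but treat that as an assumption to verify rather than a guarantee:
# Everything running, in every namespace (pods, services, deployments, etc.)
kubectl get all --all-namespaces
# Node and pod details: conditions, taints, recent events
kubectl describe node udon
kubectl describe pod -n kube-system etcd-udon
# Logs from a control plane pod
kubectl logs -n kube-system kube-apiserver-udon
# Recent cluster events, oldest first
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp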

12 February 2021

Sylvain Beucler: Godot GDScript REPL

When experimenting with Godot and its GDScript language, I realized that I missed a good old REPL (Read-Eval-Print Loop) to familiarize myself with the language and API. This is now possible with this new Godot Editor plugin :) Try it at:
https://godotengine.org/asset-library/asset/857

7 February 2021

Chris Lamb: Favourite books of 2020

I won't reveal precisely how many books I read in 2020, but it was definitely an improvement on 74 in 2019, 53 in 2018 and 50 in 2017. But not only did I read more in a quantitative sense, the quality seemed higher as well. There were certainly fewer disappointments: given its cultural resonance, I was nonplussed by Nick Hornby's Fever Pitch, and whilst Ian Fleming's The Man with the Golden Gun was a little thin (again, given the obvious influence of the Bond franchise), the book lacked 'thinness' in a way that made it interesting to critique. The weakest novel I read this year was probably J. M. Berger's Optimal, but even this hybrid of Ready Player One and late-period Black Mirror wasn't that cringeworthy, all things considered. Alas, graphic novels continue to not quite be my thing, I'm afraid. I perhaps experienced more disappointments in the non-fiction section. Paul Bloom's Against Empathy was frustrating, particularly in that it expended unnecessary energy battling its misleading title and accepted terminology (and it could so easily have been a 20-minute video essay instead). (Elsewhere in the social sciences, David and Goliath will likely be the last Malcolm Gladwell book I voluntarily read.) After so many positive citations, I was also more than a little underwhelmed by Shoshana Zuboff's The Age of Surveillance Capitalism, and after Ryan Holiday's many engaging reboots of Stoic philosophy, his Conspiracy (on Peter Thiel and Hulk Hogan taking on Gawker) was slightly wide of the mark for me. Anyway, here follows a selection of my favourites from 2020, in no particular order:

Fiction

Wolf Hall & Bring Up the Bodies & The Mirror and the Light by Hilary Mantel. During the early weeks of 2020, I re-read the first two parts of Hilary Mantel's Thomas Cromwell trilogy in time for the March release of The Mirror and the Light. I had actually spent the last few years eagerly following any news of the final instalment, feigning outrage whenever Mantel appeared to be spending time on other projects. Wolf Hall turned out to be an even better book than I remembered, and when The Mirror and the Light finally landed at midnight on 5th March, I began in earnest the next morning. Note that date carefully; this was early 2020, and the book swiftly became something of a heavy-handed allegory about the world at the time. That is to say (and without claiming that I am Monsieur Cromuel in any meaningful sense), it was an uneasy experience to be reading about a man whose confident grasp on his world, friends and life was slipping beyond his control and, at least in Cromwell's case, was heading inexorably towards its denouement. The final instalment in Mantel's trilogy is not perfect, and despite my love of her writing I would concur with the judges who decided against awarding her a third Booker Prize. For instance, there is something of the longueur that readers dislike in the second novel, although this might not be entirely Mantel's fault: after all, the rise of the "ugly" Anne of Cleves and laborious trade negotiations for an uninspiring mineral (this is no Herbertian 'spice') will never match the court intrigues of Anne Boleyn, Jane Seymour and that man for all seasons, Thomas More. Still, I am already looking forward to returning to the verbal sparring between King Henry and Cromwell when I read the entire trilogy once again, tentatively planned for 2022.

The Fault in Our Stars by John Green. I came across John Green's The Fault in Our Stars via a fantastic video by Lindsay Ellis discussing Roland Barthes' famous 1967 essay on authorial intent. However, I might have eventually come across The Fault in Our Stars regardless, not because of Green's status as an internet celebrity of sorts but because I'm a complete sucker for this kind of emotionally-manipulative bildungsroman, likely due to reading Philip Pullman's His Dark Materials a few too many times in my teens. Although its title is taken from Shakespeare's Julius Caesar, The Fault in Our Stars is actually more Romeo & Juliet. Hazel, a 16-year-old cancer patient, falls in love with Gus, an equally ill teen from her cancer support group. Hazel and Gus share the same acerbic (and distinctly unteenage) wit and a love of books, centred around Hazel's obsession with An Imperial Affliction, a novel by the meta-fictional author Peter Van Houten. Through a kind of American version of Jim'll Fix It, Gus and Hazel go and visit Van Houten in Amsterdam. I'm afraid it's even cheesier than I'm describing it. Yet just as there is a time and a place for Michelin stars and Haribo Starmix, there's surely a place for this kind of well-constructed but altogether maudlin literature. One test for emotionally manipulative works like this is how well they can mask their internal contradictions: while Green's story focuses on the universalities of love, fate and the shortness of life (as do almost all of his works, it seems), The Fault in Our Stars manages to hide, for example, that this is an exceedingly favourable treatment of terminal illness that is only possible for the better off. The 2014 film adaptation does somewhat worse in peddling this fantasy (and has a much weaker treatment of the relationship between the teens' parents too, an underappreciated subtlety of the book). The novel, however, is pretty slick stuff, and it is difficult to fault it for what it is. For some comparison, I later read Green's Looking for Alaska and Paper Towns which, as I mention, tug at many of the same strings, but they don't come together nearly as well as The Fault in Our Stars. James Joyce claimed that "sentimentality is unearned emotion", and in this respect, The Fault in Our Stars really does earn it.

The Plague by Albert Camus. P. D. James' The Children of Men, George Orwell's Nineteen Eighty-Four, Arthur Koestler's Darkness at Noon ... dystopian fiction was already a theme of my reading in 2020, so given world events it was an inevitability that I would end up with Camus's novel about a plague that swept through the Algerian city of Oran. Is The Plague an allegory about the Nazi occupation of France during World War Two? Where are all the female characters? Where are the Arab ones? Since its original publication in 1947, there's been so much written about The Plague that it's hard to say anything new today. Nevertheless, I was taken aback by how well it captured so much of the nuance of 2020. Whilst we were saying just how 'unprecedented' these times were, it was eerie how a novel written in the 1940s could accurately capture how many of us were feeling well over seventy years later: the attitudes of the people; the confident declarations from the institutions; the misaligned conversations that led to accidental misunderstandings. The disconnected lovers. The only thing that perhaps did not work for me in The Plague was the 'character' of the church. Although I could appreciate most of the allusion and metaphor, it was difficult for me to relate to the significance of Father Paneloux, particularly regarding his change of view on the doctrinal implications of the virus, and (spoiler alert) that he finally died of a "doubtful case" of the disease, beyond the idea that Paneloux's beliefs are in themselves "doubtful". Answers on a postcard, perhaps. The Plague even seemed to predict how we, at least speaking of the UK, would react when the waves of the virus waxed and waned as well:
The disease stiffened and carried off three or four patients who were expected to recover. These were the unfortunates of the plague, those whom it killed when hope was high
It somehow captured the nostalgic yearning for high-definition videos of cities and public transport; one character even visits the completely deserted railway station in Oran simply to read the timetables on the wall.

Tinker, Tailor, Soldier, Spy by John le Carré. There's absolutely none of the Mad Men glamour of James Bond in John le Carré's icy world of Cold War spies:
Small, podgy, and at best middle-aged, Smiley was by appearance one of London's meek who do not inherit the earth. His legs were short, his gait anything but agile, his dress costly, ill-fitting, and extremely wet.
Almost a direct rebuttal to Ian Fleming's 007, Tinker, Tailor has broken-down cars, bad clothes, women with their own internal and external lives (!), pathetically primitive gadgets, and (contra Mad Men) hangovers that last significantly longer than ten minutes. In fact, the main aspect that the mostly excellent 2011 film adaptation doesn't really capture is the smoggy and run-down nature of 1970s London: this is not your proto-Cool Britannia of Austin Powers or GTA:1969; the city is truly 'gritty' in the sense that there is a thin film of dirt and grime on every surface imaginable. Another angle that the film cannot capture well is just how purposefully the novel does not mention the United States. Despite the US obviously being the dominant power, the British vacillate between pretending it doesn't exist or implying its irrelevance to the matter at hand. This is no mistake on le Carré's part, as careful readers are rewarded by finding this denial of US hegemony in metaphor throughout: pace Ian Fleming, there is no obvious Felix Leiter to loudly throw money at the problem or a Sheriff Pepper to serve as cartoon racist for the Brits to feel superior about. By contrast, I recall that a clever allusion to "dusty teabags" is subtly mirrored a few paragraphs later with a reference to the installation of a coffee machine in the office, likely symbolic of the omnipresent and unavoidable influence of America. (The officer class convince themselves that coffee is a European import.) Indeed, le Carré communicates a feeling of being surrounded on all sides by the peeling wallpaper of Empire. Oftentimes, the writing style matches the gracelessness and inelegance of the world it depicts. The sentences are dense and you find your brain performing a fair amount of mid-flight sentence reconstruction, reparsing clauses, commas and conjunctions to interpret le Carré's intended meaning. In fact, in his eulogy-cum-analysis of le Carré's writing style, William Boyd, himself a ventriloquist of Ian Fleming, named this intentional technique 'staccato'. Like the musical term, I suspect the effect of this literary staccato is as much about the impact it makes on a sentence as the imperceptible space it generates after it. Lastly, the large cast in this sprawling novel is completely believable, all the way from the Russian spymaster Karla to minor schoolboy Roach, the latter possibly a stand-in for le Carré himself. I got through the 500-odd pages in just a few days, somehow managing to hold the almost-absurdly complicated plot in my head. This is one of those classic books of the genre that made me wonder why I had not got around to it before.

The Nickel Boys by Colson Whitehead. According to the judges who awarded it the Pulitzer Prize for Fiction, The Nickel Boys is "a devastating exploration of abuse at a reform school in Jim Crow-era Florida" that serves as a "powerful tale of human perseverance, dignity and redemption". But whilst there is plenty of this perseverance and dignity on display, I found little redemption in this deeply cynical novel. It could almost be read as a follow-up book to Whitehead's popular The Underground Railroad, which itself won the Pulitzer Prize in 2017. Indeed, each book focuses on a young protagonist who might be euphemistically referred to as 'downtrodden'. But The Nickel Boys is not only far darker in tone, it feels much closer and more connected to us today. Perhaps this is unsurprising, given that it is based on the story of the Dozier School in northern Florida, which operated for over a century before its long history of institutional abuse and racism was exposed by a 2012 investigation. Nevertheless, if you liked the social commentary in The Underground Railroad, then there is much more of that in The Nickel Boys:
Perhaps his life might have veered elsewhere if the US government had opened the country to colored advancement like they opened the army. But it was one thing to allow someone to kill for you and another to let him live next door.
Sardonic aperçus of this kind are pretty relentless throughout the book, but it never tips its hand too far into nihilism, especially when some of the visual metaphors are first-rate: "An American flag sighed on a pole" is one I can easily recall from memory. In general though, The Nickel Boys is not only more world-weary in tenor than his previous novel, the United States it describes seems almost too beaten down to have the energy to conjure up the Swiftian magical realism that prevented The Underground Railroad from being overly lachrymose. Indeed, even when Whitehead transports us to present-day New York City, we can't indulge in another kind of fantasy, the one where America has solved its problems:
The Daily News review described the [Manhattan restaurant] as nouveau Southern, "down-home plates with a twist." What was the twist? That it was soul food made by white people?
It might be overly reductionist to connect Whitehead's tonal downshift with the racial justice movements of the past few years, but whatever the reason, we've ended up with a hard-hitting, crushing and frankly excellent book.

True Grit & No Country for Old Men by Charles Portis & Cormac McCarthy. It's one of the most tedious clichés to claim the book is better than the film, but these two books are of such high quality that even the Coen Brothers at their best cannot transcend them. I'm grouping these books together here though, not because their respective adaptations will exemplify some of the best cinema of the 21st century, but because of their superb treatment of language. Take the use of dialogue. Cormac McCarthy famously does not use any punctuation ("I believe in periods, in capitals, in the occasional comma, and that's it"), but the conversations in No Country for Old Men feel familiar and commonplace, despite being relayed through this unconventional technique. In lesser hands, McCarthy's written-out Texan drawl would be the novelistic equivalent of white rap or Jar Jar Binks, but not only is the effect entirely gripping, it helps you to believe you are physically present in the many intimate and domestic conversations that hold this book together. Perhaps the cinematic familiarity helps, as you can almost hear Tommy Lee Jones' voice as Sheriff Bell from the opening page to the last. Charles Portis' True Grit excels in its dialogue too, but in this book it is not so much in how it flows (although that is delightful in its own way) but in how forthright and sardonic Mattie Ross is:
"Earlier tonight I gave some thought to stealing a kiss from you, though you are very young, and sick and unattractive to boot, but now I am of a mind to give you five or six good licks with my belt." "One would be as unpleasant as the other."
Perhaps this should be unsurprising. Mattie, a fourteen-year-old girl from Yell County, Arkansas, can barely fire her father's heavy pistol, so she only has words to wield as her weapon. Anyway, it's not just me who treasures this book. In her encomium that presages most modern editions, Donna Tartt of The Secret History fame traces the novel's origins through Huckleberry Finn, praising its elegance and economy: "The plot of True Grit is uncomplicated and as pure in its way as one of the Canterbury Tales". I haven't read any Chaucer, but I am inclined to agree. Tartt also recalls that True Grit vanished almost entirely from the public eye after the release of John Wayne's flimsy cinematic vehicle in 1969; this earlier film was, Tartt believes, "good enough, but doesn't do the book justice". As it happens, reading a book with its big screen adaptation as a chaser has been a minor theme of my 2020, including P. D. James' The Children of Men, Kazuo Ishiguro's Never Let Me Go, Patricia Highsmith's Strangers on a Train, James Ellroy's The Black Dahlia, John Green's The Fault in Our Stars, John le Carré's Tinker, Tailor, Soldier, Spy and even a staged production of Charles Dickens's A Christmas Carol streamed from The Old Vic. For an autodidact with no academic background in literature or cinema, I've been finding this an effective and enjoyable means of getting closer to these fine books and films: it is precisely where they deviate (or perhaps where they are deficient) that offers a means by which one can see how they were constructed. I've also found that adaptations can tell you a lot about the culture in which they were made: take the 'straightwashing' in the film version of Strangers on a Train (1951) compared to the original novel, for example. It is certainly true that adaptations rarely (as Tartt put it) "do the book justice", but she might also be right to alight on a legal metaphor, for as the saying goes, to judge a movie in comparison to the book is to do both a disservice.

The Glass Hotel by Emily St. John Mandel. In The Glass Hotel, Mandel somehow pulls off the impossible: writing a loose roman-à-clef on Bernie Madoff, a Ponzi scheme and the ephemeral nature of finance capital that is tranquil and shimmeringly beautiful. Indeed, don't get the wrong idea about the subject matter; this is no over-caffeinated The Big Short, as The Glass Hotel is less about a Madoff or coked-up financebros than about the fragile unreality of the late 2010s, a time which was, as we indeed discovered in 2020, one event away from almost shattering completely. Mandel's prose has that translucent, phantom quality to it where the chapters slip through your fingers when you try to grasp at them, and the plot is like a ghost ship that slips silently, like the Mary Celeste, onto the Canadian water next to which the eponymous 'Glass Hotel' resides. Indeed, not unlike The Overlook Hotel, the novel so overflows with symbolism that even the title needs to evoke the idea of impermanence: permanently living in a hotel might serve as a house, but it won't provide a home. It's risky to generalise about such things post-2016, but the whole story sits in the infinitesimally small distance between perception and reality, a self-constructed culture that is not so much 'post-truth' as somewhere between the two. There's something to consider in almost every character too. Take the stand-in for Bernie Madoff: no caricature of Wall Street out of a 1920s political cartoon or Brechtian satire, Jonathan Alkaitis has none of the oleaginous sleaze of a Dominique Strauss-Kahn, the cold sociopathy of a Marcus Halberstam nor the well-exercised sinuses of, say, Jordan Belfort. Alkaitis is, dare I say it, eminently likeable, and the book is all the better for it. Even the C-level characters have something to say: Enrico, trivially escaping from the regulators (who are pathetically late to the fraud, without Mandel ever telling us so explicitly), is daydreaming about the girlfriend he abandoned in New York: "He wished he'd realised he loved her before he left". What was in his previous life that prevented him from doing so? Perhaps he was never in love at all, or is love itself just as transient as the imaginary money in all those bank accounts? Maybe he fell in love just as he crossed safely into Mexico? When, precisely, do we fall in love anyway? I went on to read Mandel's Last Night in Montreal, an early work where you can feel her reaching for that other-worldly quality that she so masterfully achieves in The Glass Hotel. Her fêted Station Eleven is on my must-read list for 2021. "What is truth?" asked Pontius Pilate. Not even Mandel can give us the answer, but this will certainly do for now.

Running the Light by Sam Tallent. Although it trades in all of the clichés and stereotypes of the stand-up comedian (the triumvirate of drink, drugs and divorce), Sam Tallent's debut novel depicts an extremely convincing fictional account of a touring road comic. The comedian Doug Stanhope (who himself released a fairly decent No Encore for the Donkey memoir in 2020) hyped Sam's book relentlessly on his podcast during lockdown... and justifiably so. I ripped through Running the Light in a few short hours, the only disappointment being that I can't seem to find videos online of Sam that come anywhere close to matching his writing style. If you liked the rollercoaster energy of Paul Beatty's The Sellout, the cynicism of George Carlin and the car-crash inevitability of final-season Breaking Bad, check this great book out.

Non-fiction

Inside Story by Martin Amis. This was my first introduction to Martin Amis's work after hearing that his "novelised autobiography" contained a fair amount about Christopher Hitchens, an author with whom I had one of those rather clichéd parasocial relationships in the early days of YouTube. (Hey, it could have been much worse.) Amis calls his book a "novelised autobiography", and just as much has been made of its quasi-fictional nature as of the many diversions into didactic writing advice that sit betwixt each chapter: "Not content with being a novel, this book also wants to tell you how to write novels", complained Tim Adams in The Guardian. I suspect that reviewers who grew up with Martin since his debut book in 1973 rolled their eyes at yet another demonstration of his manifest cleverness, but as my first exposure to Amis's gift of observation, I confess that I thought it was actually kinda clever. Try, for example, "it remains a maddening truth that both sexual success and sexual failure are steeply self-perpetuating" or "a hospital gym is a contradiction like a young Conservative", etc. Then again, perhaps I was experiencing a form of nostalgia for a pre-Gamergate YouTube, when everything in the world was a lot simpler... or at least things could be solved by articulate gentlemen who honed their art of rhetoric at the Oxford Union. I went on to read Martin's first novel, The Rachel Papers (is it 'arrogance' if you are, indeed, that confident?), as well as his 1997 Night Train. I plan to read more of him in the future.

The Collected Essays, Journalism and Letters: Volume 1 & Volume 2 & Volume 3 & Volume 4 by George Orwell. These deceptively bulky four volumes contain all of George Orwell's essays, reviews and correspondence, from his teenage letters sent to local newspapers to notes to his literary executor on his deathbed in 1950. Reading this was part of a larger, multi-year project of mine to cover the entirety of his output. By including this here, however, I'm not recommending that you read everything that came out of Orwell's typewriter. The letters to friends and publishers will only be interesting to biographers or hardcore fans (although I would recommend Dorian Lynskey's The Ministry of Truth: A Biography of George Orwell's 1984 first). Furthermore, many of his book reviews will be of little interest today. Still, some insights can be gleaned; if there is any inconsistency in this huge corpus, it is that his best work is almost 'too' good and too impactful, making his merely-average writing appear like hackwork. There are some gems that don't make the usual essay collections too, and some of Orwell's most astute social commentary came out of a series of articles he wrote for the left-leaning newspaper Tribune, related in many ways to the US Jacobin. You can also see some of his most famous ideas start to take shape, years if not decades before they appear in his novels, in these prototype blog posts. I also read Dennis Glover's novelised account of the writing of Nineteen Eighty-Four, called The Last Man in Europe, and I plan to re-read some of Orwell's earlier novels during 2021 too, including A Clergyman's Daughter and his 'antebellum' Coming Up for Air, which he wrote just before the Second World War; his most under-rated novel in my estimation. As it happens, and with the exception of the US and Spain, copyright in the works published in his lifetime ends on 1st January 2021. Make of that what you will.

Capitalist Realism & Chavs: The Demonisation of the Working Class by Mark Fisher & Owen Jones. These two books are not natural companions to one another and there is likely much that Jones and Fisher would vehemently disagree on, but I am pairing these books together here because they represent the best of the 'political' books I read in 2020. Mark Fisher was a dedicated leftist whose first book, Capitalist Realism, marked an important contribution to political philosophy in the UK. However, since his suicide in early 2017, the currency of his writing has markedly risen, and Fisher is now frequently referenced due to his belief that the prevalence of mental health conditions in modern life is a side-effect of various material conditions, rather than a natural or unalterable fact "like weather". (Of course, our 'weather' is being increasingly determined more by a combination of politics, economics and petrochemistry than by pure randomness.) Still, Fisher wrote on all manner of topics, from the 2012 London Olympics to the "weird and eerie" electronic music that yearns for a lost future that will never arrive, possibly prefiguring or influencing the Fallout video game series. Saying that, I suspect Fisher will resonate better with a UK audience than one across the Atlantic, not necessarily because he was minded to write about the parochial politics and culture of Britain, but because his writing often carries some exasperation at the suppression of class in favour of identity-oriented politics, a viewpoint not entirely prevalent in the United States outside of, say, Touré F. Reed or the late Michael Brooks. (Indeed, Fisher is likely best known in the US as the author of the controversial 2013 essay, Exiting the Vampire Castle, but that does not figure greatly in this book.) Regardless, Capitalist Realism is an insightful, damning and deeply unoptimistic book, best enjoyed in the warm sunshine. I found it an ironic compliment that I had quoted so many paragraphs that my Kindle's copy protection routines prevented me from clipping any further. Owen Jones needs no introduction to anyone who regularly reads a British newspaper, especially since 2015, when he unofficially served as a proxy and punching bag for expressing frustrations with the then-Labour leader, Jeremy Corbyn. However, as the subtitle of Jones' 2012 book suggests, Chavs attempts to reveal the "demonisation of the working class" in post-financial crisis Britain. Indeed, the timing of the book is central to Jones' analysis, specifically that the stereotype of the "chav" is used by government and the media as a convenient figleaf to avoid meaningful engagement with economic and social problems on an austerity-ridden island. (I'm not quite sure what the US equivalent to 'chav' might be. Perhaps Florida Man without the implications of mental health.) Anyway, Jones certainly has a point. From Vicky Pollard to the attacks on Jade Goody, there is an ignorance and prejudice at the heart of the 'chav' backlash, and that would be bad enough even if it was not being co-opted or criminalised for ideological ends. Elsewhere in political science, I also caught Michael Brooks' Against the Web and David Graeber's Bullshit Jobs, although they are not quite methodical enough to recommend here. However, Graeber's award-winning Debt: The First 5000 Years will be read in 2021.
Matt Taibbi's Hate Inc: Why Today's Media Makes Us Despise One Another is worth a brief mention here too, though its sprawling nature felt very much like I was reading a set of Substack articles loosely edited together. And, indeed, I was.

The Golden Thread: The Story of Writing by Ewan Clayton. A recommendation from a dear friend, Ewan Clayton's The Golden Thread is a journey through the long history of writing, from the Dawn of Man to the present day. Whether you are a linguist, a graphic designer, a visual artist, a typographer, an archaeologist or 'just' a reader, there is probably something in here for you. I was already dipping my quill into calligraphy this year, so I suspect I would have liked this book in any case, but highlights would definitely include the changing role of writing due to the influence of textual forms in the workplace, as well as a digression on the ergonomic desks employed by monks and scribes in the Middle Ages. A lot of books by otherwise-sensible authors overstretch themselves when they write about computers or other technology from the Information Age, resulting in bizarre non-sequiturs at best and dangerously Panglossian viewpoints at worst. But Clayton surprised me by writing extremely cogently and accurately on the role of text in this new and unpredictable era. After finishing it I realised why: for a number of years, Clayton was a consultant for the legendary Xerox PARC, where he worked in a group focusing on documents and contemporary communications whilst his colleagues were busy inventing the graphical user interface, laser printing, text editors and the computer mouse.

New Dark Age & Radical Technologies: The Design of Everyday Life by James Bridle & Adam Greenfield. I struggled to describe these two books to friends, so I doubt I will suddenly do a better job here. Allow me to quote from Will Self's review of James Bridle's New Dark Age in the Guardian:
We're accustomed to worrying about AI systems being built that will either "go rogue" and attack us, or succeed us in a bizarre evolution of, um, evolution; what we didn't reckon on is the sheer inscrutability of these manufactured minds. And minds is not a misnomer. How else should we think about the neural network Google has built so its translator can model the interrelation of all words in all languages, in a kind of three-dimensional "semantic space"?
New Dark Age also turns its attention to the weird, algorithmically-derived products offered for sale on Amazon as well as the disturbing and abusive videos that are automatically uploaded by bots to YouTube. It should, by rights, be a mess of disparate ideas and concerns, but Bridle has a flair for introducing topics which reveals he comes to computer science from another discipline altogether; indeed, in a four-part series he made for Radio 4, he's primarily referred to as "an artist". Whilst New Dark Age has rather abstract section topics, Adam Greenfield's Radical Technologies is a rather different book altogether. Each chapter dissects one of the so-called 'radical' technologies that condition the choices available to us, asking how they work, what challenges they present to us and who ultimately benefits from their adoption. Greenfield takes his scalpel to smartphones, machine learning, cryptocurrencies, artificial intelligence, etc., and I don't think it would be unfair to say that it starts and ends with a cynical point of view. He is no reactionary Luddite, though, and this is both informed and extremely well-explained, and it also lacks the lazy, affected and Private Eye-like cynicism of, say, Attack of the 50 Foot Blockchain. The books aren't a natural pair, for Bridle's writing contains quite a bit of air in places, ironically mimicking the very 'clouds' he inveighs against. Greenfield's book, by contrast, has little air and a much lower pH value. Still, it was more than refreshing to read two technology books that do not limit themselves to platitudinal booleans, be those dangerously naive (e.g. Kevin Kelly's The Inevitable) or relentlessly nihilistic (Shoshana Zuboff's The Age of Surveillance Capitalism). Sure, they are both anti-technology screeds, but they tend to make arguments about systems of power rather than specific companies and avoid being too anti-'Big Tech' through a narrower, Silicon Valley-obsessed lens; for that (dipping into some other 2020 reading of mine) I might suggest Wendy Liu's Abolish Silicon Valley or Scott Galloway's The Four. Still, both books are superlatively written. In fact, Adam Greenfield has some of the best non-fiction writing around, both in terms of how he can explain complicated concepts (particularly the smart contract mechanism of the Ethereum cryptocurrency) as well as in his extremely finely-crafted sentences; I often felt that the writing style almost had no need to be that poetic, and I particularly enjoyed his fictional scenarios at the end of the book.

The Algebra of Happiness & Indistractable: How to Control Your Attention and Choose Your Life by Scott Galloway & Nir Eyal. A cocktail of insight, informality and abrasiveness makes NYU Professor Scott Galloway uncannily appealing to guys around my age. Although Galloway definitely has his own wisdom and experience, similar to Joe Rogan I suspect that a crucial part of Galloway's appeal is that you feel you are learning right alongside him. Thankfully, 'Prof G' is far less, err, problematic than Rogan (Galloway is more of a well-meaning, spirited centrist), although he, too, has some pretty awful takes at times. This is a shame, because removed from the whirlwind of social media he can be really quite considered, such as in this long-form interview with Stephanie Ruhle. In fact, it is this kind of sentiment that he captured in his 2019 Algebra of Happiness. When I look over my highlighted sections, it's clear that it's rather schmaltzy out of context ("Things you hate become just inconveniences in the presence of people you love..."), but his one-two punch of cynicism and saccharine ("Ask somebody who purchased a home in 2007 if their 'American Dream' came true...") is weirdly effective, especially when he uses his own family experiences as part of his story:
A better proxy for your life isn't your first home, but your last. Where you draw your last breath is more meaningful, as it's a reflection of your success and, more important, the number of people who care about your well-being. Your first house signals the meaningful: your future and possibility. Your last home signals the profound: the people who love you. Where you die, and who is around you at the end, is a strong signal of your success or failure in life.
Nir Eyal's Indistractable, however, is a totally different kind of 'self-help' book. The important background story is that Eyal was the author of the widely-read Hooked, which turned into a secular Bible of so-called 'addictive design'. (If you've ever been cornered by a techbro wielding a Wikipedia-thin knowledge of B. F. Skinner's behaviourist psychology and how it can get you to click 'Like' more often, it ultimately came from Hooked.) However, Eyal's latest effort is actually an extended mea culpa for his previous sin, and he offers both high- and low-level palliative advice on how to avoid falling for the tricks he so studiously espoused before. I suppose we should be thankful to capitalism for selling both cause and cure. Speaking of markets, there appears to be a growing appetite for books in this 'anti-distraction' category, and whilst I cannot claim to have done an exhaustive study of this nascent field, Indistractable argues its points well without relying on accurate-but-dry "studies show..." or, worse, Gladwellian gotchas. My main criticism, however, would be that Eyal doesn't acknowledge the limits of a self-help approach to this problem; it seems that many of the issues he outlines are an inescapable part of the alienation in modern Western society, and the only way one can really avoid distraction is to move up the income ladder or move out to a 500-acre ranch.

27 January 2021

Russ Allbery: Review: A Deadly Education

Review: A Deadly Education, by Naomi Novik
Series: The Scholomance #1
Publisher: Del Rey
Copyright: 2020
ISBN: 0-593-12849-4
Format: Kindle
Pages: 319
Some children are born with magic, which grows as they mature. Magic attracts maleficaria: extremely deadly magical beasts that want to feast on that magic. Having innate magical ability is therefore a recipe for endless attacks from monsters and a death at a young age. This was true even for the enclaves, which are the rich, gated communities of the magical world. Hence, the Scholomance. This is a boarding school for magic users placed in the Void and protected against maleficaria as completely as possible while still letting the students graduate and leave after their senior year. Students are sent there via a teleportation spell with a weight allowance, taught magic by automated systems and magical artifacts, and left on their own to make alliances and survive. Or not survive; protected as well as possible still means that there are maleficaria everywhere, sneaking past the wards of the graduation hall and looking for snacks. The school sends cleansing fire through the halls at certain times; the rest of the time, the students either learn enough magic to defeat maleficaria themselves, form alliances with those who can, or die to feed the magic of the school. Enter Galadriel, or El as she prefers. She's not an enclave kid; she's the grumpy, misfit daughter of a hippie mother whose open-hearted devotion to healing and giving away her abilities make her the opposite of the jealously guarded power structures of the enclaves. El has no resources other than what she can muster on her own. She also has her mother's ethics, which means that although she has an innate talent for malia, drawing magic from the death of other living things, she forces herself to build her mana through rigorously ethical means. Like push-ups. Or, worse, crochet. At the start of the book, El is in her third year of four, and significantly more of her classmates are alive than normally would be. That's because of her classmate, Orion Lake, who has made a full-time hobby of saving everyone from maleficaria. His unique magical ability frees him from the constraints of mana or malia that everyone else is subject to, and he uses that to be a hero, surrounded by adoring fans. And El is thoroughly sick of it. This book is so good in so many different ways that I don't know where to start. Obviously, A Deadly Education is a twist on the boarding school novel, both the traditional and the magical kind. This is not a genre in which I'm that well-read, but even with my lack of familiarity, I noticed so many things Novik does to improve the genre tropes, starting with not making the heroic character with the special powers the protagonist. And getting rid of all the adults, which leaves way more space for rich social dynamics between the kids (complex and interesting ones that are entangled with the social dynamics outside of the school, not some simplistic Lord of the Flies take). Going alone anywhere in the school is dangerous, as is sitting at the bad tables in the cafeteria, so social cliques become a matter of literal life and death. And the students aren't just trying to survive; the ones who aren't part of enclaves are jockeying for invitations or trying to build the power to help their family and allies form their own. El is the first-person narrator of the story and she's wonderful. She's grumpy, cynical, and sarcastic, which is often good for first-person narrators, but she also has a core of ethics from her mother, and from her own decisions, that gives her so much depth. 
She is the type of person who knows exactly how much an ethical choice will cost her and how objectively stupid it is, and then will make it anyway out of sheer stubbornness and refuse to take credit for it. I will happily read books about characters like El until the end of time. Her mother never appears in this book, and yet she's such a strong presence because El's relationship with her matters, to both El and to the book. El could not be more unlike her mother in both personality and in magical focus, and she's exasperated by the sheer impracticality of some of her mother's ideals. And yet there's a core of love and understanding beneath that, a level at which El completely understands her mother's goals, and El relies on it even when she doesn't realize she's doing so. I don't think I've ever read a portrayal of a mother-daughter relationship this good where one of the parties isn't even present. And I haven't even gotten to the world-building, and the level to which Novik chases down and explores all the implications of this ridiculous murder machine of a school. I will offer this caveat: If you poke at the justification for creating this school in the way it was built, it's going to teeter a lot. That society thought this school was the best solution to its child mortality problem is just something you have to roll with. But once you accept that, the implications are handled so very well. The school is an inhuman character in its own right, with exasperating rules that the students learn and warn each other about. It tries to distract you with rare spellbooks or artifact materials because it's trying to kill you. The language tapes whisper horrific stories of your death. The back wall of your room is a window to the Void, from which you can demand spellbooks. You'll even get them in languages that you understand, for a generous definition of understand that may have involved glancing at one page of text, so be careful not to do that! The school replaces all of the adult teachers in the typical boarding school novel and is so much more interesting than any of them because it adds the science fiction thrill of setting as character. The world-building does mean a lot of infodumping, so be prepared for that. El likes to explain things, tell stories, and over-analyze her life, and reading this book is a bit like reading the journal of a teenage girl. For me, El's voice is so strong, authentic, stubborn, and sarcastically funny that I scarcely noticed the digressions into background material. And the relationships! Some of the turns will be predictable, since of course El's stubborn ethics will be (eventually) rewarded by the story, but the dynamic that develops between El and Orion is something special. It takes a lot to make me have sympathy with the chosen one boy hero, but Novik pulls it off without ever losing sight of the dynamics of class and privilege that are also in play. And the friendships El develops almost accidentally by being stubbornly herself are just wonderful, and the way she navigates them made me respect her even more. The one negative thing I will say about this book is that I don't think Novik quite nailed the climax. Some of this is probably because this is the first book of a series and Novik wanted to hold some social developments in reserve, but I thought El got a bit sidelined and ended up along for the ride in an action-movie sequence. 
Still, it's a minor quibble, and it's clear from the very end of the book that El is going to get more attention and end up in a different social position in the next book. This was a wholly engrossing and enjoyable story with a satisfying climax and only the barb of a cliffhanger in the very last line. It's the best SFF novel published in 2020 that I've read so far (yes, even better than Network Effect). Highly recommended, and I hope it gets award recognition this year. Followed by The Last Graduate (not yet published at the time of this review). Rating: 9 out of 10

23 December 2020

John Goerzen: How & Why To Use Airgapped Backups

A good backup strategy needs to consider various threats to the integrity of data. For instance: It's that last one that is of particular interest today. A lot of backup strategies are such that if a user (or administrator) has their local account or network compromised, their backups could very well be destroyed as well. For instance, do you ssh from the account being backed up to the system holding the backups? Or rsync using a keypair stored on it? Or access S3 buckets, etc.? It is trivially easy in many of these schemes to totally ruin cloud-based backups, or even some other schemes. rsync can be run with --delete (and often is, to prune remotes), S3 buckets can be deleted, etc. And even if you try to lock down an over-network backup to be append-only, there are still vectors for attack (ssh credentials, OpenSSL bugs, etc.). In this post, I try to explore how we can protect against them and still retain some modern conveniences. A backup scheme also needs to strike a balance between:

My story so far About 20 years ago, I had an Exabyte tape drive, with the amazing capacity of 7GB per tape! Eventually, as disk prices fell, I had external disks plugged in to a server, and would periodically rotate them offsite. I've also had various combinations of partial or complete offsite copies over the Internet as well. I have around 6TB of data to back up (after compression), a figure that is growing somewhat rapidly as I digitize some old family recordings and videos. Since I last wrote about backups 5 years ago, my scheme has been largely unchanged; at present I use ZFS for local and to-disk backups and borg for the copies over the Internet. Let's take a look at some options that could make this better.

Tape The original airgapped backup. You back up to a tape, then you take the (fairly cheap) tape out of the drive and put in another one. In cost per GB, tape is probably the cheapest medium out there. But of course it has its drawbacks. Let's start with cost. To get a drive that can handle the capacities I'd be needing, at least LTO-6 (2.5TB per tape) would be needed, if not LTO-7 (6TB). New, these drives cost several thousand dollars, plus they need LVD SCSI or Fibre Channel cards. You're not going to be hanging one off a Raspberry Pi; these things need a real server with enterprise-style connectivity. If you're particularly lucky, you might find an LTO-6 drive for as low as $500 on eBay. Then there are tapes. A 10-pack of LTO-6 tapes runs more than $200, and provides a total capacity of 25TB, sufficient for these needs (note that, of course, you need to have at least double the actual space of the data, to account for multiple full backups in a set). A 5-pack of LTO-7 tapes is a little more expensive, while providing more storage. So all-in, this is going to be, in the best possible scenario, nearly $1000, and possibly a lot more. For a large company with many TB of storage, the initial costs can be defrayed due to the cheaper media, but for a home user, not so much. Consider that 8TB hard drives can be found for $150–$200. A pair of them (for redundancy) would run $300–400, and then you have all the other benefits of disk (quicker access, etc.). Plus they can be driven by something as cheap as a Raspberry Pi. Fancier tape setups involve auto-changers, but then you're not really airgapped, are you? (If you leave all your tapes in the changer, they can generally be selected and overwritten, barring things like hardware WORM.)
As useful as tape is, for this project it would simply be way more expensive than disk-based options.

Fundamentals of disk-based airgapping The fundamental thing we need to address with disk-based airgapping is that the machines being backed up have no real-time contact with the backup storage system. This rules out most solutions out there, which want to sync by comparing local state with remote state. If one is willing to throw storage efficiency out the window (maybe practical for very small data sets), one could just send a full backup daily. But in reality, what is more likely needed is a way to store a local proxy for the remote state. Then a runner device (a USB stick, disk, etc.) could be plugged into the network, filled with queued data, then plugged into the backup system to have the data dequeued and processed. Some may be tempted to short-circuit this and just plug external disks into a backup system. I've done that for a long time. This is, however, a risk, because it makes those disks vulnerable to whatever may be attacking the local system (anything from lightning to ransomware).

ZFS ZFS is, it should be no surprise, particularly well suited for this. zfs send/receive can send an incremental stream that represents a delta between two checkpoints (snapshots or bookmarks) on a filesystem. It can do this very efficiently, much more so than walking an entire filesystem tree. Additionally, with the recent addition of ZFS crypto to ZFS on Linux, the replication stream can optionally reflect the encrypted data. Yes, as long as you don't need to mount them, you can mostly work with ZFS datasets on an encrypted basis, and can directly tell zfs send to just send the encrypted data instead of the decrypted data. The downside of ZFS is the resource requirements at the destination, which in terms of RAM are higher than most of the older Raspberry Pi-style devices. Still, one could perhaps just save off zfs send streams and restore them later if need be, but that implies a periodic resend of a full stream, an inefficient operation. Deduplicating software such as borg could be used on those streams (though with less effectiveness if they're encrypted).

Tar Perhaps surprisingly, tar in listed-incremental mode can solve this problem for non-ZFS users. It will keep a local cache of the state of the filesystem as of the time of the last run of tar, and can generate new tarballs that reflect the changes since the previous run (even deletions). This can achieve a similar result to ZFS send/receive, though in a much less elegant way.

Bacula / Bareos Bacula (and its fork Bareos) both have support for a FIFO destination. Theoretically this could be used to queue up data for transfer to the airgapped machine. This support is very poorly documented in both, however, and is rumored to have bitrotted.

rdiff and xdelta rdiff and xdelta can be used as a sort of non-real-time rsync, at least on a per-file basis. Theoretically, one could generate a full backup (with tar, ZFS send, or whatever), take an rdiff signature, and send over the file while keeping the signature. On the next run, another full backup is piped into rdiff, and on the basis of the signature file of the old and the new data, it produces a binary patch that can be queued for the backup target to update its stored copy of the file. This leaves history preservation as an exercise to be undertaken on the backup target. It may not necessarily be easy and may not be efficient.
rsync batches rsync can be used to compute a delta between two directory trees and express this as a single-file batch that can be processed by a remote rsync. Unfortunately this implies the sender must always keep an old tree around (barring a solution such as ZFS snapshots) in order to compute the delta, and of course it still implies the need for history processing on the remote.

Getting the Data There OK, so you've got an airgapped system and some sort of runner device for your sneakernet (USB stick, hard drive, etc.). Now what? Obviously you could just copy data onto the runner and move it back off at the backup target. But a tool like NNCP (sort of a modernized UUCP) offers a lot of help in automating the process, returning error reports, etc. NNCP can be used online over TCP, over reliable serial links, over ssh, with offline onion routing via intermediaries or directly, etc. Imagine having an airgapped machine at a different location you go to frequently (workplace, friend, etc.). Before leaving, you put a USB stick in your pocket. When you get there, you pop it in. It's despooled and processed while you wait, and return emails or whatever are queued up to be sent when you get back home. Not bad, eh?

Future installment I'm going to try some of these approaches and report back on my experiences in the next few weeks.

22 December 2020

Joachim Breitner: Don t think, just defunctionalize

TL;DR: CPS-conversion and defunctionalization can help you to come up with a constant-stack algorithm. Update: Turns out I inadvertently plagiarized the talk The Best Refactoring You've Never Heard Of by James Koppel. Please consider this a form of sincere flattery.

The starting point Today, I'll take you on another little walk through the land of program transformations. Let's begin with a simple binary tree, with values of unknown type in the leaves, as well as the canonical map function:
data T a = L a | B (T a) (T a)
map1 :: (a -> b) -> T a -> T b
map1 f (L x) = L (f x)
map1 f (B t1 t2) = B (map1 f t1) (map1 f t2)
As you can see, this map function is using the program stack as it traverses the tree. Our goal is now to come up with a map function that does not use the stack! Why? Good question! In Haskell, there wouldn't be a strong need for this, as the Haskell stack is allocated on the heap, just like your normal data, so there is plenty of stack space. But in other languages or environments, the stack space may have a hard limit, and it may be advised to not use unbounded stack space. That aside, it's a fun exercise, and that's sufficient reason for me. (In the following, I assume that tail-calls, i.e. those where a function ends with another function call without modifying its result, do not actually use stack space. Once all recursive function calls are tail calls, the code is equivalent to an imperative loop, as we will see.)
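(For concreteness, here is a quick sanity check of map1 on a tiny example tree. This snippet is my addition, not part of the original post, and it assumes the T declaration above is extended with a deriving (Show, Eq) clause so that results can be printed and compared.)
exampleTree :: T Int
exampleTree = B (L 1) (B (L 2) (L 3))
-- map1 (+ 1) exampleTree
--   == B (L 2) (B (L 3) (L 4))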

Think? We could now just stare at the problem (rather, the code), and try to come up with a solution directly. We'd probably think: "ok, as I go through the tree, I have to remember all the nodes above me, so I need a list of those nodes; and for each of these nodes, I also need to remember whether I am currently processing the left child and yet have to look at the right one, or whether I am done with the left child... so what do I have to remember about the current node?"; ah, my brain spins already. Maybe eventually I figure it out, but why think when we can derive the solution? So let's start with the above map1, and rewrite it, in several mechanical steps, into a stack-less, tail-recursive solution.

Go! Before we set out, let me rewrite the map function using a local go helper, as follows:
map2 :: forall a b. (a -> b) -> T a -> T b
map2 f t = go t
  where
    go :: T a -> T b
    go (L x) = L (f x)
    go (B t1 t2) = B (go t1) (go t2)
This transformation (effectively the "static argument transformation") has the nice advantage that we do not have to pass f around all the time, and that when we copy the function, I only have to change the top-level name, but not the names of the inner functions. Also, I find it more aesthetically pleasing.

CPS A blunt, effective tool to turn code that is not yet using tail-calls into code that only uses tail-calls is to use continuation-passing style. If we have a function of type … -> t, we turn it into a function of type … -> (t -> r) -> r, where r is the type of the result we want at the very end. This means the function now receives an extra argument, often named k for continuation, and instead of returning some x, the function calls k x. We can apply this to our go function. Here, both t and r happen to be T b, the type of finished trees:
map3 :: forall a b. (a -> b) -> T a -> T b
map3 f t = go t (\r -> r)
  where
    go :: T a -> (T b -> T b) -> T b
    go (L x) k  = k (L (f x))
    go (B t1 t2) k  = go t1 (\r1 -> go t2 (\r2 -> k (B r1 r2)))
Note that when we initially call go, we pass the identity function (\r -> r) as the initial continuation. Et voilà: suddenly all function calls are in tail position, and this code does not use stack space! Technically, we are done, although it is not quite satisfying; all these lambdas floating around obscure the meaning of the code, are maybe a bit slow to execute, and also, we didn't really learn much yet. This is certainly not the code we would have written after "thinking hard".
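(To see why this version is constant-stack, here is a hand evaluation of go on a small two-leaf tree, with f = (+ 1). This trace is my own illustration, not from the original post; beta-reduction steps are compressed.)
-- go (B (L 1) (L 2)) (\r -> r)
-- = go (L 1) (\r1 -> go (L 2) (\r2 -> (\r -> r) (B r1 r2)))
-- = go (L 2) (\r2 -> (\r -> r) (B (L 2) r2))    -- L (f 1) = L 2 was fed to the continuation
-- = (\r -> r) (B (L 2) (L 3))                   -- L (f 2) = L 3 was fed to the continuation
-- = B (L 2) (L 3)
-- Every call is a tail call; the pending work lives in the continuation, not on the stack.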

Defunctionalization So let's continue rewriting the code into something prettier, simpler. Something that does not use lambdas like this. Again, there is a mechanical technique that can help. It likely won't make the code prettier, but it will get rid of the lambdas, so let's do that and clean up later. The technique is called defunctionalization (because it replaces functional values by plain data values), and can be seen as a form of refinement. Note that we pass around values of type (T b -> T b), but we certainly don't use arbitrary values of that type. Instead, only very specific values of that type occur in our program, so let us replace (T b -> T b) with a data type that contains representatives of just the values we actually use.
  1. We find all values of type (T b -> T b). These are:
    • (\r -> r)
    • (\r1 -> go t2 (\r2 -> k (B r1 r2)))
    • (\r2 -> k (B r1 r2))
  2. We create a datatype with one constructor for each of these:
     data K = I | K1 | K2
    (This is not complete yet.)
  3. We introduce an interpretation function that turns a K back into a (T b -> T b):
    eval :: K -> (T b -> T b)
    eval = {- TBD -}
  4. In the function go, instead of taking a parameter of type (T b -> T b), we take a K. And when we actually use the continuation, we have to turn the K back to the function using eval:
    go :: T a -> K -> T b
    go (L x) k  = eval k (L (f x))
    go (B t1 t2) k = go t1 K1
    We also do this to the code fragments identified in the first step; these become:
    • (\r -> r)
    • (\r1 -> go t2 K2)
    • (\r2 -> eval k (B r1 r2))
  5. Now we complete the eval function: For each constructor, we simply map it to the corresponding lambda from step 1:
    eval :: K -> (T b -> T b)
    eval I = (\r -> r)
    eval K1 = (\r1 -> go t2 K2)
    eval K2 = (\r2 -> eval k (B r1 r2))
  6. This doesn t quite work yet: We have variables on the right hand side that are not bound (t2, r1, k). So let s add them to the constructors K1 and K2 as needed. This also changes the type K itself; it now needs to take type parameters.
This leads us to the following code:
data K a b
  = I
  | K1 (T a) (K a b)
  | K2 (T b) (K a b)
map4 :: forall a b. (a -> b) -> T a -> T b
map4 f t = go t I
  where
    go :: T a -> K a b -> T b
    go (L x) k  = eval k (L (f x))
    go (B t1 t2) k  = go t1 (K1 t2 k)
    eval :: K a b -> (T b -> T b)
    eval I = (\r -> r)
    eval (K1 t2 k) = (\r1 -> go t2 (K2 r1 k))
    eval (K2 r1 k) = (\r2 -> eval k (B r1 r2))
Not really cleaner or prettier, but everything is still tail-recursive, and we are now working with plain data.

We like lists To clean it up a little bit, we can notice that the K data type really is just a list of values, where the values are either T a or T b. We do not need a custom data type for this! Instead of our K, we can just use the following, built from standard data types:
type K' a b = [Either (T a) (T b)]
Now I replace I with [], K1 t2 k with Left t2 : k and K2 r1 k with Right r1 : k. I also, very suggestively, rename go to down and eval to up:
map5 :: forall a b. (a -> b) -> T a -> T b
map5 f t = down t []
  where
    down :: T a -> K' a b -> T b
    down (L x) k  = up k (L (f x))
    down (B t1 t2) k  = down t1 (Left t2 : k)
    up :: K' a b -> T b -> T b
    up [] r = r
    up (Left  t2 : k) r1 = down t2 (Right r1 : k)
    up (Right r1 : k) r2 = up k (B r1 r2)
At this point, the code suddenly makes more sense again. In fact, I can try to verbalize it:
As we traverse the tree, we have to remember for all parent nodes, whether there is still something Left to do when we come back to it (so we remember a T a), or if we are done with that (so we have a T b). This is the list K' a b. We begin to go down the left of the tree (noting that the right siblings are still left to do), until we hit a leaf. We transform the leaf, and then go up. If we go up and hit the root, we are done. Else, if we go up and there is something Left to do, we remember the subtree that we just processed (as that is already in the Right form), and go down the other subtree. But if we go up and there is nothing Left to do, we put the two subtrees together and continue going up.
Quite neat!
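(To make the down/up dance concrete, here is how map5 (+ 1) runs on the example tree B (L 1) (B (L 2) (L 3)); the list argument is exactly the explicit stack of parents. This worked trace is my addition, not from the original post.)
-- down (B (L 1) (B (L 2) (L 3))) []
-- = down (L 1)           [Left (B (L 2) (L 3))]
-- = up   [Left (B (L 2) (L 3))] (L 2)
-- = down (B (L 2) (L 3)) [Right (L 2)]
-- = down (L 2)           [Left (L 3), Right (L 2)]
-- = up   [Left (L 3), Right (L 2)] (L 3)
-- = down (L 3)           [Right (L 3), Right (L 2)]
-- = up   [Right (L 3), Right (L 2)] (L 4)
-- = up   [Right (L 2)]   (B (L 3) (L 4))
-- = up   []              (B (L 2) (B (L 3) (L 4)))
-- = B (L 2) (B (L 3) (L 4))    -- the same result map1 (+ 1) gives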

The imperative loop At this point we could stop: the code is pretty, makes sense, and has the properties we want. But let's turn the dial a bit further and try to make it an imperative loop. We know that if we have a single tail-recursive function, then that's equivalent to a loop, with the function's parameters turning into mutable variables. But we have two functions! It turns out that if you have two functions a -> r and b -> r with the same return type (which they necessarily have here, since we CPS-converted them further up), then those two functions are equivalent to a single function taking "a or b", i.e. Either a b -> r. This is really nothing else than the high-school algebra rule r^a · r^b = r^(a+b). So (after reordering the arguments of up to put the T b first) we can rewrite the code as
map6 :: forall a b. (a -> b) -> T a -> T b
map6 f t = go (Left t) []
  where
    go :: Either (T a) (T b) -> K' a b -> T b
    go (Left (L x))     k        = go (Right (L (f x))) k
    go (Left (B t1 t2)) k        = go (Left t1) (Left t2 : k)
    go (Right r)  []             = r
    go (Right r1) (Left  t2 : k) = go (Left t2) (Right r1 : k)
    go (Right r2) (Right r1 : k) = go (Right (B r1 r2)) k
Do you see the loop yet? If not, maybe it helps to compare it with the following equivalent imperative looking pseudo-code:
mapLoop :: forall a b. (a -> b) -> T a -> T b
mapLoop f t {
  var node = Left t;
  var parents = [];
  while (true) {
    switch (node) {
      Left (L x) -> node := Right (L (f x))
      Left (B t1 t2) -> node := Left t1; parents.push(Left t2)
      Right r1 -> {
        if (parents.len() == 0) {
          return r1;
        } else {
          switch (parents.pop()) {
            Left t2  -> node := Left t2; parents.push(Right r1);
            Right r2 -> node := Right (B r1 r2)
          }
        }
      }
    }
  }
}

Conclusion I find it enlightening to see how apparently very different approaches to a problem (recursive, lazy functions and imperative loops) are connected by a series of rather mechanical transformations. When refactoring code, it is helpful to see if one can conceptualize the refactoring as one of those mechanical steps (refinement, type equivalences, defunctionalization, CPS conversion, etc.). If you liked this post, you might enjoy my talk The many faces of isOrderedTree, which I have presented at MuniHac 2019 and Haskell Love 2020.

14 December 2020

Jonathan Dowland: git rebasing and lab books

For my PhD work, I've been working on preparing an experimental branch of StrIoT for merging down to the main branch. This has been a long-lived branch (a year!) within which I've been exploring some ideas. Some of the code I want to keep, and some I don't. The history of the experimental branch is consequently messy. Looking it over and considering what a reviewer needs to see, there are a lot of things that are irrelevant and potentially distracting. And so, I've been going through an iterative process of steadily whittling down the history to the stuff that matters: some strings of commits are dropped, others squashed together, and others re-ordered. The resulting branch is a historic fiction. This is common practice. Joey Hess ruminated about it 5 years ago in "our beautiful fake histories", pointing out that the real history is also useful, and perhaps worth preserving. After a recent conversation with my supervisor I realised the situation was analogous to writing a research paper (or a thesis): the process of getting to the conclusion which the thesis documents is messy, with false starts, wrong directions, and plenty of roads-not-travelled. The eventual write-up focusses on the path that led to the conclusion, and a lot of the side-quest stuff disappears. The "true history" then, is captured elsewhere: in lab books, diaries and the like, and these have their own value. So do my messy exploratory branches, before they've been cleaned up for merging.

13 December 2020

Russ Allbery: Review: Because Internet

Review: Because Internet, by Gretchen McCulloch
Publisher: Riverhead Books
Copyright: 2019
ISBN: 0-7352-1095-0
Format: Kindle
Pages: 276
If you're familiar with linguistics as a field of scientific study (as opposed to the tool-based fields of grammar or writing advice), you'll be familiar with the dichotomy between written and spoken language. We may spend more time thinking about written language since it is central to most types of education and carries much of the intellectual and social weight of society. Linguists, however, see spoken language as more fundamental, since speech is wired into our brains and universal in human societies. Written language is a recent and somewhat artificial invention. One also learns from linguistics that spoken language does not follow many of the rules of written language that we painstakingly memorized in school. In casual speech, people split infinitives, speak in partial and run-on sentences, ignore nit-picking pronoun case rules, and rarely notice or care about the difference between less and fewer. Spoken language does have rules, but they're more subtle and nuanced than the grammar rules we learn in school. (I think the real fun of linguistics is separating the rules that native speakers follow effortlessly from the artificial rules used as education markers.) This is, in part, because nearly all spoken language is informal, whereas nearly all written language is formal. Enter the Internet, and enter this book. For the first time in human history we have both an explosion of informal writing and easy availability of that writing to linguists for study. Informal writing is not entirely new, of course. We've had personal letters for nearly as long as we've had writing, not to mention private notes, diaries, and other writing intended for tiny audiences. But consider who wrote private letters and, on top of that historical filter, whose private letters were preserved for linguistic research. Until relatively recently, only the upper classes were literate and had access to the infrastructure to write and send letters. Someone's letters or private notes were unlikely to be preserved unless they were someone famous and important, and thus often well-educated and more likely to take a more formal tone in writing. If you compare this to the Internet-driven blizzard of work and personal email, SMS conversations, chatrooms, and social media posts, the difference is obvious in both volume and level of informality. We're all on the Internet, we all read and write with a frequency that would be staggering to the average person from even fifty years ago, and while one may take a bit of additional care with a tricky email to one's manager, the SMS message to one's friend is as informal of a use of language as a conversation over coffee. Gretchen McCulloch is a professional linguist and Because Internet is about exactly this phenomenon: the new conventions of informal writing, how it has changed and evolved, and the new subtleties and shortcuts we've invented to make written communication easier. That goes beyond words and grammar to encompass punctuation, emoji and emoticons, memes and reaction gifs, and even the subtleties of timing, whitespace, and the construction of virtual places via our choices in how and where we write. This topic is my catnip, so it's not surprising I love this book. I've been heavily involved with online communities that communicate in writing since 1993 (making me, in McCulloch's classification, an Old Internet Person; each wave of introduction to the Internet has its own conventions that can be in conflict with later waves). 
I've now spent more than half my life carrying out most of my social activity and most of my closest friendships primarily in writing, so I found a lot of satisfaction in a linguistic study that takes that seriously rather than treating it as a curiosity. But, even better, I was amazed at how much I didn't know, in part because I am from a specific wave. I have a deep intuition for the Usenet conventions, but not as good of an understanding of the ones from AIM and LiveJournal one wave later (the Full Internet People). And I had a lot to learn about the conventions of the Instagram and Snapchat cluster (the Post Internet People, who have never known life without the Internet). One of the things that struck me while reading this book is how most of the language innovations that McCulloch describes are addressing the old complaint that written communication is inferior to face-to-face conversation because it lacks emotional nuance. My knee-jerk reply is that, no, written communication is full of emotional nuance and the complainer is just bad at reading it, but that's somewhat unfair. A better statement of the problem is that there is not a standardized language for emotional nuance in written communication, in part because it's so new in human history. Most humans are extremely good at reading facial expressions and body language for emotional cues, and those physical expressions are largely subconscious, reliable, and similar among different people (particularly within a culture; one can get in trouble with body language variations across cultures). This is not true of writing. With friends I've talked to over chat for twenty-five years, I can read volumes about their emotional state in a couple of short lines of text. But with strangers, despite decades of Internet communications, I will still misread cues and misinterpret simple intentions. The other standard response to this complaint is that it is possible to put extensive emotional nuance into formal writing. Just get better at writing! This is true, but unhelpful. There's a reason why we give book contracts to people who are very good at investing formal writing with emotional nuance. It's difficult, time-consuming, and requires a great deal of practice. That may be appropriate for formal, paid writing, but it won't do for informal writing, which by definition needs to be as effortless as possible. It's therefore unsurprising that once millions of people were using the Internet regularly for informal writing, they started adding new mechanisms, shortcuts, and conventions for emotional nuance. The standardization is growing, but conventions still vary widely between waves of Internet users. One of the most fascinating parts of this book for me was McCulloch's explanation of why periods (and, to a lesser extent, capital letters) in short chat messages are perceived by younger users as harsh or passive-aggressive. I still have the formal writing mindset of treating proper capitalization and punctuation as a point of pride, but McCulloch makes an excellent argument for letting go of my biases and understanding how and why language is changing. The realization I had while reading this is that many of the changes that look like sloppiness or laziness to someone trained in formal writing have the effect of giving language greater dynamic range. If one always uses periods uniformly, the period becomes meaningless except as a sentence boundary (which is redundant with newlines in most short informal chat messages). 
If one normally doesn't use it, and then suddenly starts using it, the period can carry semantic weight. It can convey a snippy tone of voice, a note of annoyance, or other subtle shades of meaning. I still use periods in most of my Slack messages because habits are hard to break, but I'm remembering to leave them off some of the time and paying more attention to what emotional weight they're carrying when present. Because Internet is therefore the rare book that meets the bar of changing my day-to-day behavior. "lol" is another excellent example that McCulloch spends some time on. It started life as LOL, an abbreviation for "laughing out loud," and that's still how it's stuck in my head. But, as McCulloch explains, it no longer means that to newer waves of Internet users. It now carries a far more complicated and nuanced meaning that has very little to do with physical laughter and that doesn't easily translate to a single word or sentence. I went from being mildly irritated by and mildly superior towards the ubiquitous "lol" to realizing that it's a fascinating new word that carries primarily emotional nuance and that I don't understand well enough to read or use properly (yet). One more example of the type of analysis McCulloch brings to this book: emoji. The tendency when talking about emoji is to treat them as rebuses (pictures that stand in for a word, or at least a specific concept). They are sometimes used that way, but McCulloch argues that they more often function in the same role that gestures play in informal speech, including the gestures that have no simple name and no independent meaning outside of the context of the words being said at the same time. This seems obvious in retrospect, but before reading Because Internet I had never thought about what a gesture is, what function it plays in speech, and how that could be translated into informal written communication. If you're as interested in this area as I am, this is great stuff. I'd seen several mentions of this book go past on Twitter and kept holding off because I had lots of things to read and was worried it would only cover the superficial things I already knew as a long-time Internet user who has listened to a few lectures on linguistics. That was not the case at all. I learned so much from this book and had a delightful time reading it. If you're also interested in these topics, recommended. Rating: 9 out of 10

28 November 2020

Mark Brown: Book club: Rust after the honeymoon

Earlier this month Daniel, Lars and myself got together to discuss Bryan Cantrill's article Rust after the honeymoon. This is an overview of what keeps him enjoying working with Rust after having used it for an extended period of time for low-level systems work at Oxide; we were particularly interested to read a perspective from someone who was both very experienced in general and had been working with the language for a while. While I have no experience with Rust, both Lars and Daniel have been using it for a while and greatly enjoy it. One of the first areas we discussed was data-bearing enums; these have been very important to Bryan. In keeping with a pattern we all noted, these take a construct that's relatively commonly implemented by hand in C (or skipped as too much effort, as Lars found) and provide direct support in the language for it. For both Daniel and Lars this has been key to their enjoyment of Rust: it makes things that are good practice or common idioms in C and C++ into first-class language features, which makes them more robust and allows them to fade into the background in a way they can't when done by hand. Daniel was also surprised by some omissions, some small, such as the ? operator, but others much more substantial, the standout one being editions. These aim to address the problems seen with version transitions in other languages like Python, allowing individual parts of a Rust program to adopt potentially incompatible language features while remaining interoperable with older editions of the language, rather than requiring the entire program to be upgraded en masse. This helps Rust move forwards with less need to maintain strict source-level compatibility, allowing much more rapid evolution and helping deal with any issues that are found. Lars expressed the results of this very clearly, saying that while lots of languages offer a 20%/80% solution, which does very well in specific problem domains but has issues for some applications, Rust is much more able to move towards general applicability by addressing problems and omissions as they are understood. This distracted us a bit from the actual content of the article and we had an interesting discussion of the issues with handling OS differences in filenames portably. Rather than mapping filenames onto a standard type within the language and then having to map back out into whatever representation the system actually uses, Rust has an explicit type for filenames which must be explicitly converted on those occasions when it's required, meaning that a lot of file handling never needs to worry about anything except the OS-native format and doesn't run into surprises. This is in keeping with Rust's general approach to interfacing with things that can't be represented in its abstractions: rather than hide things, it keeps track of where things that might break its assumptions are, and requires the programmer to acknowledge and handle them explicitly. Both Lars and Daniel said that this made them feel a lot more confident in the code that they were writing and that they had a good handle on where complexity might lie; Lars noted that Rust is the first language he's felt comfortable writing multi-threaded code in. We all agreed that the effect here was more about having idioms which tend to be robust, which both encourage writing things well and give readers tools to help know where particular attention is required; no tooling can avoid problems entirely.
This was definitely an interesting discussion for me, with my limited familiarity with Rust; hopefully Daniel and Lars also got a lot out of it!

3 October 2020

Ritesh Raj Sarraf: First Telescope

Curiosity I guess this would be common to most of us. While I grew up, right from childhood itself, the sky was always an intriguing view. The Stars, the Moon, the Eclipses were all fascinating. As a child, in my region, religion and culture, the mythology also built up stories around it. Lunar Eclipses have a story of their own. During Solar Eclipses, parents still insist that we do not go out, and that we be done with eating before/after the eclipse. Then there's the Hindu Astrology part, which claims its own theories and drags in mythology along. For example, you'll still find Hindu Astrology making recommendations to follow certain practices with the planets, to get auspicious personal results. As far as I know, other religions too have similar beliefs about the planets. As children, we are told to address the Moon as an Uncle. There's also a rhyme around it that many of us must have heard. And if you look at our god, Lord Mahadev, he's got a crescent on his head.
Lord Mahadev

Reality Fast-forward to today: as I grew, so did some of my understanding. It is fascinating how much understanding mankind has achieved of our surroundings. You could go through the documentaries on Mars Exploration, for example, to see how the rovers are providing invaluable data. As a mere individual, there's a limit to what one can achieve. But the questions flow in freely.
  • Is there life beyond us
  • What s out there in the sky
  • Why is all this the way it is

Hobby The very first step, for me, for every such curiosity, has been to do the ground work with the resources I have. To study the subject. I have done this all my life. For example, I started into the Software domain as: A curiosity => A Hobby => A profession. Same was the case with some of the other hobbies, just as difficult as Astronomy, that I developed a liking for. I just did the ground work, studied those topics and then applied the knowledge to further improve it and build up some experience. And star gazing came in no different. As a complete noob, I had to start with the A B C of the subject of Astronomy. Familiarize myself with the usual terms. And so on. PS: Do keep in mind that not all hobbies have a successful end. For example, I always craved to be good with graphic design, image processing and the like, where I've always failed. I never was able to keep myself motivated enough. Similar was my experience when trying to learn to play a musical instrument. It just didn't work out for me, then. There's also a phase in it where you fail, then learn from the failures and proceed further, and then eventually succeed. But we all like to talk about the successes. :-)

Astronomy So far, my impression has been that this topic/domain will not suit most people. While the initial attraction may be strong, given the complexity and perseverance that Astronomy requires, most people would lose interest in it very soon. Then there's the realization factor. If one goes in with an expectation of quick results, they may get disappointed. It isn't like a point-and-shoot device that'd give you results on the spot. There's also the expectation side of things. If you are a person more accustomed to taking pretty selfies, which always come out right because the phone manufacturer does heavy processing on the images to ensure that you get to see a pretty fake self most of the time, then star gazing with telescopes could be a frustrating experience altogether. What you get to see in the images on the internet will be very different from what you'd be able to see with your eyes and your basic telescope. There's also the cost aspect. The more powerful (and expensive) your telescope, the better your view. And all things aside, you may still lose interest after you've done all the ground work and spent a good chunk of money on it, simply because the object you are gazing at is more of a still image, which can quickly get boring for many. On the other hand, if none of these things get in the way, then the domain of Astronomy can be quite fascinating. It is a continuous learning domain (reminds me of CI in our software field these days). It is just the beginning for us here, and we hope to have a lasting experience in it.

The Internet I have been indebted to the internet right from the beginning. The internet is what helped me be able to achieve all I wanted. It is one field with no boundaries. If there is a will, there is a way; and often times, the internet is the way.
  • I learnt computers over the internet.
  • Learnt more about gardening and plants over the internet
  • Learnt more about fish care-taking over the internet
And many, many more things. Some of the communities on the internet are a great way to participate. They bridge the age gap, the regional gap and many more. For my Astronomy needs, I was glad to see so many active communities, with great participants, on the internet.

Telescope While there are multiple options for starting star gazing, I chose to start with a telescope. But as someone completely new to this domain, there was a long way to go. And to add to that, real life: work + family. I spent a good 12+ months reading up on the different types of telescopes, what they are, their differences, their costs, their practical availability, etc. The good thing is that the market has offerings for everything, from a very basic pair of binoculars to a fully automatic Maksutov-Cassegrain scope. It all depends on your budget.

Automatic vs Manual To make it easy for users, the market has multiple options on offer. One could opt in for a cheap, basic and manually operated telescope, which would require the user to do a lot of ground study. On the other hand, users also have the option of automatic telescopes which do the hard work of locating and tracking the planetary objects. Either option aside, the end result of how much you'll be able to observe the sky still depends on many, many more factors: enthusiasm over time, light pollution, clear skies, timing, etc. PS: The planetary objects move at a steady pace. Objects you lock into your view now will be gone from the FOV in just a matter of minutes.

My Telescope After spending so much time reading up on types of telescopes, my conclusion was that a scope with high aperture and focal length was the way to go forward. This made me shorten the list to Dobsonians. But Dobsonians aren't very cheap telescopes, whether manual or automatic. My final decision made me acquire a 6" Dobsonian Telescope. It is a Newtonian Reflecting Telescope with a 1200mm focal length and 150mm diameter. Another thing about this subject is that most of the stuff you do in Astronomy, right from telescope selection, to installation, to star gazing, is DIY, so your mileage may vary with the end result and experience. For me, installation wasn't very difficult. I was able to assemble the base Dobsonian mount and the scope in around 2 hours. But the installation manual I had been provided with was very brief. I ended up with one module in the mount wrongly fitted, which I was able to fix later with the help of online forums.
Dobsonian Mount
In this image you can see that the side facing out, where the handle will go, is wrong. If fit this way, the handle will not withstand any weight at all.
Correct Panel Side
The right fix of the handle base board. In this image, the handle is on the other side, which I'm holding. Because the initial fit did some damage to the engineered wood, I fixed it up by sealing it with some adhesive. With that, this is what my final telescope looks like.
Final Telescope

Clear Skies While the telescope was ready, the skies were not. For almost the next 10 days, we had no clear skies at all. All I could do was wait. Wait so long that I had forgotten to check on the skies. Luckily, my wife noticed clear skies this week for a single day. Clear enough that we could try out our telescope for the very first time.
Me posing for a shot

Telescope As I said earlier, in my opinion, this subject takes a lot of patience and perseverance. And most of the things here are DIY. To start with, we targeted the Moon, because it is easy. I pointed the scope at the moon, then looked into the finder scope to center it, and then looked through the eyepiece. And blank. Nothing out there. Turns out the finder scope and the viewer's angle weren't aligned. This is common, and it is the first DIY step when you plan to use your telescope for viewing. Since our first attempt was unplanned and just random, because we luckily spotted that the skies were clear, we weren't prepared for this. Luckily, mapping the difference in the alignment in your head is not very difficult. After a couple of minutes, I could make out the point in the finder scope where the object, if projected, would show properly in the viewer. With that done, it was just mesmerizing to see the Moon in a bit more detail than what I've seen in all these years of my life.
Moon
The images are not exactly what we saw with our eyes. The view was much more vivid than these pictures. But as a first-timer, I really wanted to capture this first moment of a closer view of the Moon. In the whole process (the ground work of studying telescopes, the installation of the telescope, astronomy basics and many other things), the most difficult part of this entire journey was to point my phone at the viewing eyepiece to get a shot of the object. This requirement just introduced me to astrophotography. And then, Dobsonians aren't the best model for astrophotography, from what I've learnt so far. Hopefully, I'll find ways to do some DIY astrophotography with the tools I have. Or extend my arsenal over time. But overall, we've been very pleased with the subject of Astronomy. It is a different feel altogether and we're glad to have forayed into it.

7 August 2020

Jonathan Dowland: Vimwiki

At the start of the year I began keeping a daily diary for work as a simple text file. I've used various other approaches for this over the years, including many paper diaries and more complex digital systems. One great advantage of the one-page text file was it made assembling my weekly status report email very quick, nearly just a series of copies and pastes. But of course there are drawbacks and room for improvement. vimwiki is a personal wiki plugin for the vim and neovim editors. I've tried to look at it before, years ago, but I found it too invasive, changing key bindings and display settings for any use of vim, and I use vim a lot. I decided to give it another look. The trigger was actually something completely unrelated: Steve Losh's blog post "Coming Home to vim". I've been using vim for around 17 years but I still learned some new things from that blog post. In particular, I've never bothered to Use The Leader for user-specific shortcuts. The Leader, to me, feels like a namespace that plugins should not touch: it's like the /usr/local of shortcut keys, a space for the local user only. Vimwiki's default bindings include several incorporating the Leader. Of course since I didn't use the leader, those weren't the ones that bothered me: It turns out I regularly use carriage return and backspace for moving the cursor around in normal mode, and Vimwiki steals both of those. It also truncates the display of (what it thinks are) URIs. It turns out I really prefer to see exactly what's in the file I'm editing. I haven't used vim folds since I first switched to it, despite them being why I switched. After disabling all the default bindings and the URI-concealing stuff, Vimwiki is now much less invasive and I can explore its features at my own pace:
let g:vimwiki_key_mappings = { 'all_maps': 0, }
let g:vimwiki_conceallevel = 0
let g:vimwiki_url_maxsave = 0 
Followed by explicitly configuring the bindings I want. I'm letting it steal carriage return. And yes, I've used some Leader bindings after all.
nnoremap <leader>ww :VimwikiIndex<cr>
nnoremap <leader>wi :VimwikiDiaryIndex<cr>
nnoremap <leader>wd :VimwikiMakeDiaryNote<cr>
nnoremap <CR> :VimwikiFollowLink<cr>
nnoremap <Tab> :VimwikiNextLink<cr>
nnoremap <S-Tab> :VimwikiPrevLink<cr>
nnoremap <C-Down> :VimwikiDiaryNextDay<cr>
nnoremap <C-Up> :VimwikiDiaryPrevDay<cr>
,wd (my leader) now brings me straight to today's diary page, and I can create separate, non-diary pages for particular work items (e.g. a Ticket reference) that will span more than one day, and keep all the relevant stuff in one place.

18 July 2020

Chris Lamb: The comedy is over

By now everyone must have seen the versions of comedy shows with the laugh track edited out. The removal of the laughter doesn't just reveal the artificial nature of television and how it conscripts the viewer into laughing along; by subverting key conversational conventions, it reveals some of the myriad and subtle ways humans communicate with one another:
Although the show's conversation is ostensibly between two people, the viewer serves as a silent third actor through which they, and therefore we, are meant to laugh along. Then, when this third character is forcibly muted, viewers not only have to endure the stilted gaps, they also sense an uncanny loss of familiarity by losing their 'own' part in the script. A similar phenomenon can be seen in other art forms. In Garfield Minus Garfield, the forced negative spaces that these pauses introduce are discomfiting, almost to the level of performance art:
But when the technique is applied to other TV shows such as The Big Bang Theory, it is unsettling in entirely different ways, exposing the dysfunctional relationships and the adorkable misogyny at the heart of the show:
Once you start to look for it, the ur-elements of the audience, response and timing in the way we communicate are everywhere, from the gaps we leave so that others instinctively know when we have finished speaking, to the myriad of ways you can edit a film. These components are always present; it is only when one of them is taken away that they become more apparent. Today, the small delays added by videoconferencing add an uncanny awkwardness to many of our everyday interactions too. It is said that "comedy is tragedy plus timing", so it is unsurprising that Zoom's undermining of timing leads, by this simple calculus of human interactions, to feelings of... tragedy.

Leaving aside the usual comments about Pavlovian conditioning and the shows that are the exceptions, complaints against canned laughter are the domain of the pub bore. I will therefore only add two brief remarks. First, rather than being cynically added to artificially inflate the lack of 'real' comedy, laugh tracks were initially added to replicate the live audience of existing shows. In other words, without a laugh track, these new shows might have ironically appeared almost as eerie as the fan edits cited above are today. Secondly, although laugh tracks are described as "false", this is not entirely correct. After all, someone did actually laugh, even if it was for an entirely different joke. In his Simulacra and Simulation, cultural theorist Jean Baudrillard might have poetically identified canned laughter as a "reflection of a profound reality", rather than an outright falsehood. One day, when this laughter becomes entirely algorithmically generated, Baudrillard would describe it as "an order of sorcery", placing it metaphysically on the same level as the entirely pumpkin-free Pumpkin Spiced Latte.

For a variety of reasons I recently decided to try interacting with various social media platforms in new ways. One way of loosening my addiction to this pornography of the amygdala was to hide the number of replies, 'likes' and related numbers:
The effect of installing this extension was immediate. I caught my eyes darting to where the numbers had been and realised I had been subconsciously looking for the input and perhaps even the outright validation of the masses. To be sure, these numbers can be relevant and sometimes useful, but they do implicitly involve delegating part of your responsibility of thinking for yourself to the vox populi, or the Greek chorus of the 21st century. Like many of you reading this, I am sure I told myself that the number of 'likes' has no bearing on whether I should agree with something, but hiding the numbers reveals much of this might have been a convenient fiction; as an entire century of discoveries in behavioural economics has demonstrated, all the pleasingly-satisfying arguments for rational free-market economics stand no chance against our inherent buggy mammalian brains.

Tying a few things together, when attempting to doomscroll through social media without these numbers, I realised that social media without the scorecard of engagement is almost exactly like watching these shows without the laugh track. Without the number of 'retweets', the lazy prompts to remind you exactly when, how and for how much to respond are removed, and replaced with the same stilted silences of those edited scenes from Friends. At times, the existential loneliness of Garfield Minus Garfield creeps in too, and there is more than enough of the dysfunctional, validation-seeking and parasocial 'conversations' of The Big Bang Theory. Most of all, the whole exercise permits a certain level of detached, critical analysis, allowing one to observe that the platforms often feel like a pre-written script with your 'friends' cast as actors, all perpetuated on the heady fumes of rows INSERT-ed into a database on the other side of the world. I'm not quite sure how this will affect my usage of the platforms, and any time spent away from these sites may mean fewer online connections at a time when we all need them the most. But as Karal Marling, professor at the University of Minnesota, wrote about artificial audiences: "Let me be the laugh track."

12 July 2020

Enrico Zini: Police brutality links

I was a police officer for nearly ten years and I was a bastard. We all were.
As nationwide protests over the deaths of George Floyd and Breonna Taylor are met with police brutality, John Oliver discusses how the histories of policing ...
The death of Stefano Cucchi occurred in Rome on 22 October 2009 while the young man was in pre-trial detention. The causes of his death and the responsibilities for it are the subject of judicial proceedings that have involved, on the one hand, the doctors of the Pertini hospital,[1][2][3][4] and on the other continue to involve, in various capacities, several members of the Arma dei Carabinieri[5][6]. The case attracted public attention following the publication of the autopsy photos, which were then picked up by Italian press agencies, newspapers and television news programmes[7]. The affair has also inspired documentaries and feature films.[8][9][10]
The death of Giuseppe Uva occurred on 14 June 2008 after he had been stopped while drunk, during the night between 13 and 14 June, by two carabinieri who took him to their barracks; from there he was transferred, for compulsory medical treatment, to the hospital in Varese, where he died the following morning of cardiac arrest. According to the prosecution, his death was caused by the physical restraint he was subjected to during the arrest and by the subsequent violence and torture he suffered in the barracks. The trial of the two carabinieri who carried out the arrest and of six other police officers ended with the defendants being acquitted of the charges of unintentional homicide and kidnapping[1][2][3][4]. The documentary Viva la sposa by Ascanio Celestini is dedicated to the case[1][5].
The Aldrovandi case is the judicial affair arising from the killing of Federico Aldrovandi, a student from Ferrara, on 25 September 2005 following a police check.[1][2][3] On 6 July 2009 the courts sentenced four police officers to 3 years and 6 months of imprisonment for "culpable excess in the legitimate use of weapons";[1][4] on 21 June 2012 the Court of Cassation upheld the conviction.[1] The inquiry to establish the cause of death was followed by others into alleged cover-ups and into the complaints filed between the parties involved.[1] The case received a great deal of media attention and inspired a documentary, È stato morto un ragazzo.[1][5]
Federico Aldrovandi (17 July 1987, Ferrara – 25 September 2005, Ferrara) was an Italian student who was killed by four policemen.[1]
24 June 2020

17 June 2020

Ulrike Uhlig: On Language

Language is a tool of power In school, we read the philologist's diary of Victor Klemperer about the changes in the German language during the Third Reich, LTI - Lingua Tertii Imperii, a book which makes it clear that the use of language is political, creates realities, and has reverse repercussions on concepts of an entire society. Language was one of the tools that supported Nazism in insidiously pervading all parts of society.

Language shapes our concepts of society Around the same time, a friend of mine proposed to read Egalia's daughters by Gerd Brantenberg, a book in which gendered words were reversed: so that human becomes huwim, for example. This book made me take notice of gendered concepts that often go unnoticed.

Language shapes the way we think and feel I spent a large part of my adult life in France, which confronted me with the realization that a language provides its speakers with certain concepts. If a concept does not exist in a language, people cannot easily feel or imagine this concept either. Back then (roughly 20 years ago), even though I was aware of gender inequality, I hated using gender-neutral language because in German and French it felt unnatural, and, or so I thought, we were all alike. One day, at a party, we played a game that consisted in guessing people's professions by asking them Yes/No questions. Turns out that we were unable to guess that the woman we were talking with was a doctor, because we could simply not imagine this profession for a young woman. In French, docteur is male and almost nobody would use the word doctoresse or femme docteur. Unimaginable are also the concepts of words in German that have no equivalent in French or vice versa: Or, to make all this a bit less serious, Italian has the word gattara (female) or gattaro (male), which one could translate to English roughly as cat person, most often designating old women who feed stray cats. But really, the way language shapes our concepts and ideas goes much further, as is well explained by Lera Boroditsky in a talk in which she explains how language influences concepts of space, time, and blame, among other things.

Building new models This quote by Buckminster Fuller is pinned on the wall over my desk:
You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.
A change in language is such a new model: it can make oppression and inequalities visible. Words do not only describe our world; they are a vehicle of ideas, and of utopias. Analyzing and criticizing our use of language means paving the way for ideas and concepts of inclusion, equality, and unity. You might be guessing where I am going with this. Right: I am in favor of acknowledging past mistakes, and of replacing oppressive metaphors in computing. As noted in the IETF draft about Terminology, Power and Oppressive Language, by Niels Ten Oever and Mallory Knodel, the metaphors "master/slave" and "blacklist/whitelist" associate "white with good and black with evil [which] is known as the 'bad is black effect'", all the while being technically inaccurate. I acknowledge that this will take time. There is a lot of work to do.

Russ Allbery: Review: Network Effect

Review: Network Effect, by Martha Wells
Series: Murderbot Diaries #5
Publisher: Tor
Copyright: May 2020
ISBN: 1-250-22984-7
Format: Kindle
Pages: 351
Network Effect is the first Murderbot novel, although the fifth story of the series. The previous stories, beginning with All Systems Red, were novellas. Under no circumstances should you start reading the series here. Network Effect builds significantly on the story arc that ended with Exit Strategy and resolves some important loose ends from Artificial Condition. It's meant to be read in series order. I believe this is the first time in my life that I've started reading a book on the night of its release. I was looking forward to this novel that much, and it does not disappoint. I'll try not to spoil the previous books too much in this review, but at this point it's a challenge. Just go read them. They're great. The big question I had about the first Murderbot novel was how would it change the plot dynamic of the series. All of the novellas followed roughly the same plot structure: Murderbot would encounter some humans who need help, somewhat grudgingly help them while pursuing its own agenda, snark heavily about human behavior in the process, once again prove its competence, and do a little bit of processing of its feelings and a lot of avoiding them. This formula works great at short length. Would Wells change it at novel length, or if not, would it get tedious or strained? The answer is that Wells added in quite a bit more emotional processing and relationship management to flesh out the core of the book and created a plot with more layers and complexity than the novella plots, and the whole construction works wonderfully. This is exactly the book I was hoping for when I heard there would be a Murderbot novel. If you like the series, you'll like this, and should feel free to read it now without reading the rest of the review.
Overse added, "Just remember you're not alone here." I never know what to say to that. I am actually alone in my head, and that's where 90 plus percent of my problems are.
Many of the loose ends in the novellas were tied up in the final one, Exit Strategy. The biggest one that wasn't, at least in my opinion, was ART, the research transport who helped Murderbot considerably in Artificial Condition and clearly was more than it appeared to be. That is exactly the loose end that Wells resolves here, to great effect. I liked the dynamic between ART and Murderbot before, but it's so much better with an audience to riff off of (and yet better still when there are two audiences, one who already knew Murderbot and one who already knew ART). I like ART almost as much as Murderbot, and that's saying a lot. The emotional loose end of the whole series has been how Murderbot will decide to interact with other humans. I think that's not quite resolved by the end of the novel, but we and Murderbot have both learned considerably more. The novellas, except for the first, are mostly solo missions even when Murderbot is protecting clients. This is something more complicated; the interpersonal dynamics hearken back to the first novella and then go much deeper, particularly in the story-justified flashbacks. Wells uses Murderbot's irritated avoidance to keep some emotional dynamics underplayed and indirect, letting the reader discover them at opportune moments, and this worked beautifully for me. And Murderbot's dynamic with Amena is just wonderful, mostly because of how smart, matter-of-fact, trusting, and perceptive Amena is. That's one place where the novel length helps: Wells has more room to expand the characterization of characters other than Murderbot, something that's usually limited in the novellas to a character or two. And these characters are great. Murderbot is clearly the center of the story, but the other characters aren't just furniture for it to react to. They have their own story arcs, they're thoughtful, they learn, and it's a delight to watch them slot Murderbot into various roles, change their minds, adjust, and occasionally surprise it in quite touching ways, all through Murderbot's eyes.
Thiago had said he felt like he should apologize and talk to me more about it. Ratthi had said, "I think you should let it go for a while, at least until we get ourselves out of this situation. SecUnit is a very private person, it doesn't like to discuss its feelings." This is why Ratthi is my friend.
I have some minor quibbles. The targetSomething naming convention Murderbot comes up with and then is stuck with because it develops too much momentum is entertaining but confusing. A few of the action sequences were just a little on the long side; I find the emotional processing much more interesting. There's also a subplot with a character with memory holes and confusion that I thought dragged on too long, mostly because I found the character intensely irritating for some reason. But these are just quibbles. Network Effect is on par with the best of the novellas that precede it, and that's a high bar indeed. In this series, Wells has merged the long-running science fiction thread of artificial intelligences and the humanity of robots with the sarcastic and introspective first-person narration of urban fantasy, gotten the internal sensation of emotional avoidance note-perfect without making it irritating (that's some deep magic right there), and added in some top-tier negotiation of friendship and relationships without losing the action and excitement of a great action movie. It's a truly impressive feat and the novel is the best installment so far. I will be stunned if Network Effect doesn't make most of the award lists next year. Followed by Fugitive Telemetry, due out in April of 2021. You can believe that I have already preordered it. Rating: 9 out of 10

19 April 2020

Enrico Zini: Little wonders

Gibbsdavidl/CatterPlots
devel
Did you ever wish you could make scatter plots with cat shaped points? Now you can! - Gibbsdavidl/CatterPlots
What is the best tool to use for drawing vector pictures? For me and probably for many others, the answer is pretty obvious: Illustrator, or, maybe, Inkscape.
A coloring book to help folks understand how SELinux works. - mairin/selinux-coloring-book
The EURion constellation (also known as Omron rings[1] or doughnuts[2]) is a pattern of symbols incorporated into a number of banknote designs worldwide since about 1996. It is added to help imaging software detect the presence of a banknote in a digital image. Such software can then block the user from reproducing banknotes to prevent counterfeiting using colour photocopiers. According to research from 2004, the EURion constellation is used for colour photocopiers but probably not used in computer software.[3] It has been reported that Adobe Photoshop will not allow editing of an image of a banknote, but in some versions this is believed to be due to a different, unknown digital watermark rather than the EURion constellation.[4][3]
This huge collection of non-scary optical illusions and fascinating visual phenomena emphasizes interactive exploration, beauty, and scientific explanation.
Generated photos are created from scratch by AI systems. All images can be used for any purpose without worrying about copyrights, distribution rights, infringement claims, or royalties.
A 1984 documentary film about the shunters at the Dresden-Friedrichstadt railway yard in the GDR.
The Sardinian term femina accabadora, femina agabbadóra or, more commonly, agabbadora or accabadora (s'agabbadóra, literally "she who ends", from the Sardinian s'acabbu, "the end", or from the Spanish acabar, "to finish") denotes the historically uncertain figure of a woman who took it upon herself to bring death to people of any age, when they were in such a state of illness that the family, or the victim herself, asked for it. In reality there is no proof of this practice, which would have concerned certain Sardinian regions such as Marghine, Planargia and Gallura[1]. The practice was not supposed to be paid for by the sick person's relatives, since paying for death was contrary to religious precepts and to superstition.
Alright the people have spoken and they want more cat genetics. So, I present to you all "Cat Coat Genetics 101: A Tweetorial", feat. pics of many real life cats (for science, of course...this baby is Caterpillar).

3 November 2017

Rogério Brito: Comparison of JDK installation of various Linux distributions

Today I spent some time in the morning seeing how one would install the JDK on Linux distributions. This is to create a little comparative tutorial to teach introductory Java. Installing the JDK is, thanks to the OpenJDK developers in Debian and Ubuntu (Matthias Klose and helpers), a very easy task. You simply type something like:
apt-get install openjdk-8-jdk
Since for a student it is better to have everything available for experiments, I install the full version, not only the -headless version. Given my familiarity with Debian/Ubuntu, I didn't have to think about how to install it, of course. But as this tutorial is meant to be as general as I can make it, I also tried to include instructions on how to install Java on other distributions. The first two that came to my mind were openSUSE and Fedora. Both use the RPM package format for their "native" packages (in the same sense that Debian uses DEB packages for its "native" packages). But they use different higher-level tools to install such packages: Fedora uses a tool called dnf, while openSUSE uses zypper. To try these distributions, I got their netinstall ISOs and used qemu/kvm to install them in a virtual machine. I used the following to install/run the virtual machines (the example below is, of course, for openSUSE):
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
    -net nic,model=e1000 -net user \
    -drive index=0,media=disk,cache=unsafe,file=suse.qcow2 \
    -cdrom openSUSE-Leap-42.3-NET-x86_64.iso
The names of the packages also change from one distribution to another. On Fedora, I had to use:
dnf install java-1.8.0-openjdk-devel
On openSUSE, I had to use:
zypper install java-1_8_0-openjdk-devel
Note that one distribution uses dots in the package names while the other uses underscores. One interesting thing I noticed with dnf was that, when I used it, it automatically refreshed the package lists from the network, something I had forgotten to do myself, and it was a pleasant surprise. I don't know about zypper, but I guess it probably had fresh indices when the installation finished. Both installations were effortless once I knew the names of the packages to install. Oh, BTW, in my 5-minute exploration of these distributions, I noticed that if you don't want the JDK but only the JRE, you omit the -devel suffix. It makes sense when you think about it, for consistency with other packages, but Debian's conventions also make sense (JRE with the -jre suffix, JDK with the -jdk suffix). I failed miserably to use Fedora's prebaked, vanilla cloud image, as I couldn't log in on that image, so I decided to just install the whole OS on a fresh virtual machine. I don't have instructions on how to install on Gentoo or Arch, though. I now see how hard it is to cover instructions/provide software for as many distributions as you wish, given the multitude of package managers, conventions, etc.
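To round off the comparison, here is a minimal sketch of the JRE-only installs implied by the naming conventions above, plus a quick sanity check; the package names simply follow the -jre/-devel patterns described in the post and should be verified against each distribution's current repositories:
# Debian/Ubuntu: the runtime-only package uses the -jre suffix
apt-get install openjdk-8-jre
# Fedora: drop the -devel suffix to get only the runtime
dnf install java-1.8.0-openjdk
# openSUSE: same idea, with underscores instead of dots
zypper install java-1_8_0-openjdk
# Sanity check: java -version should work in any case;
# javac -version only works if the full JDK (-jdk / -devel) variant is installed
java -version
javac -version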

2 March 2017

Joey Hess: what I would ask my lawyers about the new Github TOS

The Internet saw Github's new TOS yesterday and collectively shrugged. That's weird. I don't have any lawyers, but the way Github's new TOS is written, I feel I'd need to consult with lawyers to understand how it might affect the license of my software if I hosted it on Github. And the license of my software is important to me, because it is the legal framework within which my software lives or dies. If I didn't care about my software, I'd be able to shrug this off, but since I do, it seems very important indeed, and not worth taking risks with. If I were looking over the TOS with my lawyers, I'd ask these questions...
4 License Grant to Us
This seems to be saying that I'm granting an additional license to my software to Github. Is that right or does "license grant" have some other legal meaning? If the Free Software license I've already licensed my software under allows for everything in this "License Grant to Us", would that be sufficient, or would my software still be licensed under two different licenses? There are violations of the GPL that can revoke someone's access to software under that license. Suppose that Github took such an action with my software, and their GPL license was revoked. Would they still have a license to my software under this "License Grant to Us" or not? "Us" is actually defined earlier as "GitHub, Inc., as well as our affiliates, directors, subsidiaries, contractors, licensors, officers, agents, and employees". Does this mean that if someone, say, does some brief contracting with Github, they get my software under this license? Would they still have access to it under that license when the contract work was over? What does "affiliates" mean? Might it include other companies? Is it even legal for a TOS to require a license grant? Don't license grants normally involve an intentional action on the licensor's part, like signing a contract or writing a license down? All I did was load a webpage in a browser and see on the page that by loading it, they say I've accepted the TOS. (I then set about removing everything from Github.) Github's old TOS was not structured as a license grant. What reasons might they have for structuring this TOS in such a way? Am I asking too many questions only 4 words into this thing? Or not enough?
Your Content belongs to you, and you are responsible for Content you post even if it does not belong to you. However, we need the legal right to do things like host it, publish it, and share it. You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.
If this is a software license, the wording seems rather vague compared with other software licenses I've read. How much wiggle room is built into that wording? What are the chances that, if we had a dispute and this came before a judge, Github's lawyers would be able to find a creative reading of this that makes "do things like" include whatever they want? Suppose that my software is javascript code or gets compiled to javascript code. Would this let Github serve up the javascript code for their users to run as part of the process of rendering their website?
That means you're giving us the right to do things like reproduce your content (so we can do things like copy it to our database and make backups); display it (so we can do things like show it to you and other users); modify it (so our server can do things like parse it into a search index); distribute it (so we can do things like share it with other users); and perform it (in case your content is something like music or video).
Suppose that Github modifies my software, does not distribute the modified version, but converts it to javascript code and distributes that to their users to run as part of the process of rendering their website. If my software is AGPL licensed, they would be in violation of that license, but doesn't this additional license allow them to modify and distribute my software in such a way?
This license does not grant GitHub the right to sell your Content or otherwise distribute it outside of our Service.
I see that "Service" is defined as "the applications, software, products, and services provided by GitHub". Does that mean at the time I accept the TOS, or at any point in the future? If Github tomorrow starts providing say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not? If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license that they want to my software?
5 License Grant to Other Users Any Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and "fork" your repositories (this means that others may make their own copies of your Content in repositories they control).
Let's say that company Foo does something with my software that violates its GPL license and the license is revoked. So they are no longer allowed to copy my software under the GPL, but it's there on Github. Does this "License Grant to Other Users" give them a different license under which they can still copy my software? The word "fork" has a particular meaning on Github, which often includes modification of the software in a repository. Does this mean that other users could modify my software, even if its regular license didn't allow them to modify it or had been revoked? How would this use of a platform-specific term "fork" be interpreted in this license if it was being analyzed in a courtroom?
If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality. You may grant further rights if you adopt a license.
This paragraph seems entirely innocuous. So, what does your keen lawyer mind see in it that I don't? How sure are you about your answers to all this? We're fairly sure we know how well the GPL holds up in court; how well would your interpretation of all this hold up? What questions have I forgotten to ask?
And finally, the last question I'd be asking my lawyers: What's your bill come to? That much? Is using Github worth that much to me?

26 January 2017

John Goerzen: What is happening to America?

I still remember vividly my first visit to Europe, back in 2010. I had just barely gotten off a plane in Hamburg and onto a bus to Lübeck, and struck up a conversation with a friendly, well-educated German classical musician next to me. We soon started to discuss politics and religion. Over the course of the conversation, in response to his questions, I explained that I had twice voted against George W. Bush, that I opposed the war in Iraq for many reasons, that I thought there was an ethical imperative to work to defeat climate change, that I viewed health care as an important ethical and religious issue, that I thought evolution was well-established, and that I am a Christian. Finally, without any hint of insult intended, and with rather a lot of surprise written all over his face, he said: "Wow. You're an American, and a Christian, and you're so... normal!" This, it seems to me, has a lot to do with Trump.
Ouch
It felt like a punch to the gut. The day after the election, having learned that a man who appeared to stand for everything that honorable people are against had won the election, I was, like people all around the world, trying to make sense of it: how could this happen? As I've watched since, as he stacks government with wealthy cronies with records nearly as colorful as his own, it is easy to feel even more depressed. Based on how Trump spoke and acted, it would be easy to conclude that the "deplorables" won the day: that he was elected by a contingent of sexists or racists ascendant in power. But that would be too simple an explanation. This is, after all, the same country that elected Barack Obama twice. There are many people that voted twice for a black man, and then for Trump. Why? Racism, while doubtless a factor, can't explain it all.
How Trump could happen
Russ Allbery made some excellent points recently:
[Many Americans are] hurt, and they're scared, and they feel like a lot of the United States just slammed the door in their faces. The status quo is not working for people. Technocratic government by political elites is not working for people. Business as usual is not working for people. Minor tweaks to increasingly arcane systems is not working for people. People are feeling lost in bureaucracy, disaffected by elections that do not present a clear alternate vision, and depressed by a slow slide into increasingly dismal circumstances. Government is not doing what we want it to do for us. And people are getting left behind. The left in the United States (of which I'm part) has for many years been very concerned about the way blacks and other racial minorities are systematically pushed to the margins of our economy, and how women are pushed out of leadership roles. Those problems are real. But the loss of jobs in the industrial heartland, the inability of a white, rural, working-class man to support his family the way his father supported him, the collapse of once-vibrant communities into poverty and despair: those problems are real too. The status quo is not working for anyone except for a few lucky, highly-educated people on the coasts. People, honestly, like me, and like many of the other (primarily white and male) people who work in tech. We are one of the few beneficiaries of a system that is failing the vast majority of people in this country.
Russ is, of course, right. The Democrats have been either complicit in policies damaging to many, or ineffective in preventing them. They have often appeared unconcerned with the plight of people outside cities (even if that wasn't really the case). And it goes deeper. When's the last time you visited Kansas? I live in Kansas. The nearest paved road is about a 3-mile drive from my home. The nearest town, population 600, is a 6-mile drive. My governor, whom I did not vote for, cut taxes on the wealthy so much that our excellent local schools have been struggling for years. But my community is amazing, full of loving and caring people, the sort of people who you know you'll be living with for 40 years, and so you make sure you get along well with. I have visited tourist sites in Berlin, enjoyed an opera and a Broadway show in New York, taken a train across the country to Portland, explored San Francisco. I've enjoyed all of them. Many rural people do get out and experience the world. I have been in so many conversations where I try to explain where I live to people that simply cannot fathom it. I have explained how the 18 acres I own is a very small amount where I am. How, yes, I do actually have electricity and Internet. How a bad traffic day is one where I have to wait for three cars to go past before turning onto the paved road. How I occasionally find a bull in my front yard, how I can walk a quarter mile and be at the creek on the edge of my property, how I can get to an airport faster than most New Yorkers and my kids can walk out the front door and play in a spot more peaceful than Central Park, and how all this is way cheaper than a studio apartment in a bad part of San Francisco. It is rare indeed to see visitors actually traveling to Kansas as a destination. People have no concept of the fact that my mechanic would drop everything and help me get my broken-down car to the shop for no charge, that any number of neighbors or uncles would bring a tractor and come plow the snow off my 1/4-mile driveway out of sheer kindness, that people around here really care for each other in a way you don't see in a city. There are people that I know see politics way differently than me, but I know them to be good people. They would also do anything for a person in need, no matter who they are. I may find the people that they vote for to be repugnant, but I cannot say I've looked this person in the eyes and they are nothing but deplorable. And so, people in rural areas feel misunderstood. And they are right.
Some perspectives on Trump
As I've said, I do find Trump to be deplorable, but not everyone that voted for him is. How, then, do people wind up voting for him? The New Yorker had an excellent story about a man named Mark Frisbie, owner of a welding and fab shop. The recession had been hard on his business. His wife's day-care center also closed. Health care was hard to find, and the long, slow decline had spanned politicians of every stripe. Mark and his wife supposedly did everything they were supposed to: they worked hard, were honest, were entrepreneurial, and yet he had lost his business, his family house, his health coverage, everything. He doesn't want a handout. He wants to be able to earn a living. Asked who he'd vote for, he said, "Is none of the above an option?" The Washington Post had another insightful article, about a professor from Madison, WI interviewing people in rural areas.
She said people would often say: "All the decisions are made in Madison and Milwaukee and nobody's listening to us. Nobody's paying attention, nobody's coming out here and asking us what we think. Decisions are made in the cities, and we have to abide by them." She pushed back, hard, on the idea that Trump supporters are ignorant, and added that liberals who push that line of thinking are only making the problem worse. I would agree; seeing all the talk about universities dis-inviting speakers that don't hew to certain political views doesn't help either. A related article talks about the lack of empathy for Trump voters. And then we have a more recent CNN article, Where Trump support and Obamacare use soar together, explaining in great detail how it can be logical for someone to be on Obamacare but not like it. We can all argue that the Republicans may have as much to do with that as anything, but the problem exists. And finally, a US News article makes this point:
His supporters realize he's a joke. They do not care. They know he's authoritarian, nationalist, almost un-American, and they love him anyway, because he disrupts a broken political process and beats establishment candidates who've long ignored their interests. When you're earning $32,000 a year and haven't had a decent vacation in over a decade, it doesn't matter who Trump appoints to the U.N., or if he poisons America's standing in the world, you just want to win again, whoever the victim, whatever the price. According to the Republican Party, the biggest threat to rural America was Islamic terrorism. According to the Democratic Party it was gun violence. In reality it was prescription drug abuse and neither party noticed until it was too late.
Are we leaving people out?
All this reminded me of reading about Donald Knuth, the famous computer scientist and something of the father of modern computing, writing about his feelings of trepidation about sharing with his university colleagues that he was working on a project related to the Bible. I am concerned about the complaints about the "PC culture", because I think it is good that people aren't making racist or anti-semitic jokes in public anymore. But, as some of these articles point out, in many circles, making fun of Christians and conservatives is still one of the accepted targets. Does that really help anything? (And as a Christian who is liberal, have all of you who aren't Christians so quickly forgotten how churches like the Episcopals blazed the way for marriage equality many years ago already?)
But they don't get a free pass
I have found a few things, however, absolutely scary. One was an article from December showing that Trump voters actually changed their views on Russia after Trump became the nominee. Another one from just today was a study on how people reacted when shown inauguration crowd photos. NPR ran a story today as well, on how Trump is treating journalists like China does. Chilling stuff indeed.
Conclusion
So where does this leave us? Heading into uncertain times, for sure, but perhaps just maybe with a greater understanding of our neighbors. Perhaps we will all be able to see past the rhetoric and polarization, and understand that there is something, well, normal about each other. Doing that is going to be the only way we can really take our country back.
